Table of Contents Author Guidelines Submit a Manuscript
Journal of Robotics
Volume 2014, Article ID 278659, 11 pages
http://dx.doi.org/10.1155/2014/278659
Research Article

Swarm Robot Control for Human Services and Moving Rehabilitation by Sensor Fusion

1Electronic Study Program, State Polytechnic of Sriwijaya, Palembang 30139, Indonesia
2Department of Mechanical Engineering, Toyohashi University of Technology, Toyohashi 441-8580, Japan

Received 19 July 2013; Accepted 11 December 2013; Published 2 February 2014

Academic Editor: Oliver Sawodny

Copyright © 2014 Tresna Dewi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A current trend in robotics is fusing different types of sensors having different characteristics to improve the performance of a robot system and also benefit from the reduced cost of sensors. One type of robot that requires sensor fusion for its application is the service robot. To achieve better performance, several service robots are preferred to work together, and, hence, this paper concentrates on swarm service robots. Swarm service mobile robots operating within a fixed area need to cope with dynamic changes in the environment, and they must also be capable of avoiding dynamic and static obstacles. This study applies sensor fusion and swarm concept for service mobile robots in human services and rehabilitation environment. The swarm robots follow the human moving trajectory to provide support to human moving and perform several tasks required in their living environment. This study applies a reference control and proportional-integral (PI) control for the obstacle avoidance function. Various computer simulations are performed to verify the effectiveness of the proposed method.

1. Introduction

A current trend in robotics is fusing different types of sensors having different characteristics to improve the performance of the robot system and also benefit from the reduced cost of sensors. Sensor fusion is a combination of sensory data that has inherent redundancy and may provide robust recognition of a robot’s working environment [13].

A service robot, which operates either semi- or fully autonomously to perform services useful to human well-being (excluding manufacturing operations [49]), requires sensor fusion to enable it to recognize the environment because it is expected to work in dynamic environments such as human living environment.

One of the applications of service robots is rehabilitation environment. The primary objectives of the rehabilitation robots are to either fully or partially perform tasks that benefit the disabled people and support a rehabilitee’s manipulative function [1012]. Conventional, rehabilitation programs relied heavily on the experience and manual manipulation of the rehabilitator. Because rehabilitation must be conducted carefully and the number of rehabilitee continues to increase, a well-designed service robot may prove effective in providing the support required for careful rehabilitation.

Although most service robots developed thus far are based on a single robot application, to provide better service, multiple service robots working together to create a swarm robot team are preferred, particularly for human services and rehabilitation purposes. The swarm robot may not only support the rehabilitee’s movement but also conduct various tasks such as collecting or transporting certain objects during rehabilitation.

A swarm service mobile robot operating in the human living environment needs to cope with its dynamic changes. Hence, the fundamental function of the robot is to avoid static and dynamic obstacles. Particularly, mobile robots in the swarm team must maintain their velocity and avoid collisions with other swarm mates [1320]. Existing studies have employed proximity [14] and vision sensors [17, 21] for swarm robot obstacle avoidance and the radio frequency identification (RFID) for localization and navigation purposes [2124].

This study presents the collision avoidance control for swarm robots moving in a dynamic environment of moving obstacles. A method is presented for sensor fusion to achieve motion in the dynamic environment. The proposed method combines the information obtained by several proximity sensors, an image sensor, and localization sensors (RFID system). This study applies a leader-follower formation in which the swarm robot team has a leader robot that follows the rehabilitee, while the other robots follow the leader robot. The robot controllers comprise a reference and PI controller. The reference controller generates a robot motion trajectory by referring to sensor information in real-time, and the PI controller makes the robots follow the generated motion trajectory. Various simulation results, which assume the presence of several static and dynamic obstacles in the human living environment, demonstrate the effectiveness of the proposed design.

2. Mobile Robots Dynamics

This study considers a typical two-wheeled differential-drive mobile robot for swarm robots as shown in Figure 1. Notations for Figure 1 and the following equations are given at the end of this section.

278659.fig.001
Figure 1: Two-wheeled mobile robot.

The dynamics of the mobile robot are given by

The translational and angular velocity and acceleration of the mobile robot are represented by the following equations: The motor dynamics of the right and left wheels are given by Substituting (1)–(3) into (4), we obtain the state equations as The notations used in Figure 1 and (1)–(5) are given as follows::moment of inertia of the robots around the center of gravity:mass of the robots:damping coefficient:torque and force applied to the robot to follow the rehabilitee or the leader robot:heading angle of the robots:angular and translational velocities of the robots:angular and translational acceleration of the robots:driving forces for the right and left wheels:half width of the robots:angles of the right and left wheels:angular velocity of the right and left wheels:angular acceleration of the right and left wheels:angle between the proximity sensors:distance data retrieved from proximity sensors and a vision sensor ():moment of inertia of motors:index for right, left, and time, respectively:input voltage to the left and right wheels:driving gain of the motor:wheels radius.

3. Controller Design

This study applies a reference and PI controller for collision avoidance as shown in Figure 2. The reference controller generates a reference trajectory for the leader and follower robots, and the PI controller makes the robots follow the reference trajectory of the leader.

278659.fig.002
Figure 2: Block diagram of proposed control system.
3.1. Reference Controller Design

The environmental information required to create the reference trajectory is provided by fusing multiple sensors: a Kinect sensor, four proximity sensors, and an RFID system. We assume that the RFID system indicates the position of the rehabilitee. The RFID tag attached to the rehabilitee is read by the RFID reader that is attached to the leader, and this helps the leader in tracking and following the rehabilitee by identifying his/her position. The RFID tag is also attached to the leader whose signal is read by the follower, so that it also can track and follow the leader.

Human position is the goal position for the robots. In this study, human position is assumed to be given from the RFID system, and robots follow the human trajectory as the reference trajectory.

When there is no obstacle on the way from the initial position to the goal position, the input torque and force for the robot in (1) and (2), and , are given as follows: where is the desired rotational acceleration and is the desired translational acceleration. They are obtained from the reference (human) trajectory.

The control algorithm for the leader and the follower is the same. The existence of obstacles creates virtual torques and forces (, , , and ) to the dynamics of mobile robots.

Therefore, in the environments where the presence of several static and dynamic obstacles is assumed, we consider the following reference model to generate the reference trajectory: where : virtual damping coefficient and : virtual torque and force to avoid collision ().

Although damping coefficients are not considered in (1) and (2) because they are normally small, and are included in (7) to ensure the stable motion of the robot.

The reference trajectories are created by adding virtual torques and forces (, , , and ) to the dynamics of mobile robots in (7). The virtual torques and forces are calculated by considering the fused data input from the Kinect sensor, proximity sensors, and the RFID system, as shown in Figure 2.

Torques and are designed for collision avoidance and ensuring that the robots move parallel to the virtual wall of the passages, respectively. The force provides the deceleration effect based on the distance of robots to the obstacles, and provides the deceleration effect based on the approaching speed of dynamic obstacles. These four parameters are used to adjust the magnitude of the virtual external force/torque and are calculated as shown below where : sign function of the translational velocity of the robots, : adjusted constants to provide the effective collision avoidance (), : time derivative of the distance to obstacles, which corresponds to the approaching robot’s speed to the obstacles, and : shape function that represents the relationship between the virtual external force and distance to the obstacles; the shape function is 0 when and is 1 when .

The shape function of the distance in Figure 3 is designed as follows: where : design parameters for defining the shape function of the virtual external force and : maximum distance that can be measured by the sensor.

278659.fig.003
Figure 3: Shape function of the distance.

The virtual external force/torque increases as the distance of the obstacle decreases. The virtual external force will be zero when the distance of the robot from the obstacle is greater than , which is a design parameter according to the sensor specification.

3.2. PI Controller Design and Dynamics for Simulation

The PI controller is employed to make the robots follow the desired trajectory. In this study, the PI controller is employed by calculating the difference between the desired and actual velocities. Consider The reference velocities () for the right and left wheels are calculated by where : reference velocities for the right and left wheels, : desired translational and angular velocity, : proportional gain, and : integral gain.

The PI control system design for simulation is presented in Appendix A.

3.3. Stability Analysisy

This study considers the case where the robot is situated inside four walls space as shown in Figure 4; the objective of this arrangement is to investigate the effect of the walls to the robot in order to confirm the stability of the robot system.

278659.fig.004
Figure 4: Stability analysis model.

The distance from the sensors to the walls in Figure 4 is given by where : distance from the mobile robot to the wall, ,: centre position of the robot, and : robot orientation.

The shape function in (9) is approximated as follows: where and    are constants for stability analysis and given by where and are constants for stability analysis and given by In a similar manner, we have where and are assumed.

Considering the dynamics in (1) and related equations, we have where is the velocity of the robot.

Substituting (14), (17), (19), and (20) into (21) results in By notating that we have the following dynamics from (22): From Figure 4, we have where is the orientation of robot in Figure 4 and considered to have small magnitude to analyse the stability in a typical linear manner.

Notating that  , we have where is a matrix derived from (24) as follows: The determinant is given by Because is positive, if the conditions below are satisfied, then the system in (26) is stable [25].

Next, we consider the translational motion of mobile robots. Considering (2) and related equations, we have Substituting (14), (17), (19) and (20) into (30) results in From and notating that , we have where is a matrix derived from (31) as follows: The determinant is If , , , the system in (33) is stable [25].

4. Simulation Results

Computer simulations were performed to verify the effectiveness of the proposed method. Figure 5 shows the initial condition of the human living environment in which several static and dynamic objects exist. It includes tables low enough for the rehabilitee to step over, although these tables are considered to be static obstacles for robots. The passing humans are considered to be the dynamic obstacles that have to be avoided by the robots.

278659.fig.005
Figure 5: The initial condition of the human living environment.

This simulation applies the Kinect sensor to enable the robot to “see” static and dynamic obstacles. Figure 6 shows the simulation result of the Kinect sensor application. The simulation was conducted by taking the distance data input from real Kinect sensor detecting approaching human. The distance data is also compared with that from the proximity sensors. The application of Kinect sensor in Figure 6 shows that when the passing human approaches the robots and the distance from the robot to the human is smaller than the allowed distance, the robots stop when the dwelling period reaches approximately 1500 s.

278659.fig.006
Figure 6: Simulation result of the Kinect sensor application.

Figure 7 shows the computer simulation screenshot, and Figure 8 shows the simulation results that demonstrate the effectiveness of the proposed system. Figure 7(a) shows the rehabilitee moving between tables. Figure 7(b) shows the environment in which the rehabilitee moves randomly.

fig7
Figure 7: Computer simulation screenshot assuming several static and dynamic obstacles in the human living environment.
fig8
Figure 8: Computer simulation results of generated trajectories.

In Figure 8, the green, red, and blue lines are trajectories of the rehabilitee, leader, and follower, respectively. In Figure 8(a), when the rehabilitee walks between tables, some ripples occur for obstacle avoidances. Figure 8(b) shows the result in random setup, in which more ripples are shown. In all these cases, the swarm robots successfully follow the rehabilitee while avoiding obstacles.

5. Conclusion

This study presents the design of a collision avoidance control system for swarm robots moving in an environment that includes moving obstacles. The swarm robots follow the rehabilitee to provide support in performing his/her tasks in a dynamic environment. This study applies a reference and PI controller. The reference controller creates the reference trajectory for the PI controller based on the fused sensor information obtained from the Kinect, proximity sensors and RFID system. The obstacle avoidance trajectory is generated by the reference controller, and the stability of the overall system is analytically verified. Various computer simulations are performed to verify the effectiveness of the proposed method. The rehabilitee was successfully followed by the swarm robots in all situations.

Appendices

A. PI Control System Design for Simulation

We apply the PI controller to (4), and the following dynamics are obtained: where and are reference velocities for right and left wheels.

Substituting (1)–(3) into (A.1) results in where is the integral of the tracking error given by Substituting (A.3) into (A.2), we have Similarly, we have the following dynamics for the left wheel: Equations (A.4) and (A.5) give Then, we have Defining the state vector by , input vector by , and output vector by in (A.8), we employed the following linear dynamics for simulation: where

B. Computer Simulation Screenshot and Simulation Results

Figure 9 shows several environment setups used in this study. Figures 9(a) and 9(b) show the environments without obstacles in which the rehabilitee moves around the rooms and the robots follow him/her. Figure 9(c) shows the rehabilitee moving between tables. Figures 9(d) and 9(e) show environments in which the rehabilitee steps over the tables and robots avoid them while still following him/her. Figures 9(f), 9(g), and 9(h) show the environment in which the rehabilitee moves randomly.

fig9
Figure 9: Environment setups in this study to verify the effectiveness.

Figure 10 shows simulation results of complete environment setup in Figure 10 where the green, red, and blue lines are trajectories of the rehabilitee, leader, and follower, respectively. Figures 10(a) and 10(b) show the setup where no obstacles are found. The resulting graphs show no ripples (evidence of obstacle avoidance) since the mobile robots only consider the rehabilitee-robot distance and the distance between the robots, the rehabilitee just walks around, and robots determine the distance to static and dynamic obstacles. In Figure 10(c), the rehabilitee walks between tables; therefore, some ripples caused by obstacle avoidances are shown. Figures 10(d) and 10(e) show the environmental setup where the rehabilitee steps over tables and the robots follow the human and avoid tables. Figures 10(f), 10(g), and 10(h) show results in random setups, in which more ripples are shown. In all these cases, the swarm robots successfully follow the rehabilitee while avoiding obstacles.

fig10
Figure 10: Simulation results of robot trajectories for the environments in Figure 9.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors sincerely appreciate anonymous reviewers for valuable comments and suggestions.

References

  1. J. Llinas and D. L. Hall, “Introduction to multi-sensor data fusion,” in Proceedings of the IEEE International Symposium on Circuits and Systems, pp. 537–540, June 1998. View at Scopus
  2. M. Kam, X. Zhu, and P. Kalata, “Sensor fusion for mobile robot navigation,” Proceedings of the IEEE, vol. 85, no. 1, pp. 108–119, 1997. View at Google Scholar · View at Scopus
  3. L. Jetto, S. Longhi, and D. Vitali, “Localization of a wheeled mobile robot by sensor data fusion based on a fuzzy logic adapted Kalman filter,” Control Engineering Practice, vol. 7, no. 6, pp. 763–771, 1999. View at Google Scholar · View at Scopus
  4. T. Haidegger, M. Barreto, P. Gonçalves et al., “Applied ontologies and standards for service robots,” Robotics and Autonomous Systems, vol. 61, no. 11, pp. 1215–1223, 2013. View at Google Scholar
  5. N. Tschichold-Gürman, S. J. Vestli, and G. Schweitzer, “Service robot MOPS: first operating experiences,” Robotics and Autonomous Systems, vol. 34, no. 2-3, pp. 165–173, 2001. View at Publisher · View at Google Scholar · View at Scopus
  6. Y. Qing-Xiao, Y. Can, F. Zhuang, and Z. Yan-Zheng, “Research of the localization of restaurant service robot,” International Journal of Advanced Robotic Systems, vol. 7, no. 3, pp. 227–238, 2010. View at Google Scholar · View at Scopus
  7. Y.-H. Wu, C. Fassert, and A.-S. Rigaud, “Designing robots for the elderly: appearance issue and beyond,” Archives of Gerontology and Geriatrics, vol. 54, no. 1, pp. 121–126, 2012. View at Publisher · View at Google Scholar · View at Scopus
  8. D. Sun, J. Zhu, C. Lai, and S. K. Tso, “A visual sensing application to a climbing cleaning robot on the glass surface,” Mechatronics, vol. 14, no. 10, pp. 1089–1104, 2004. View at Publisher · View at Google Scholar · View at Scopus
  9. K. Kim, M. Siddiqui, A. Francois, G. Medioni, and Y. Cho, “Robust real-time vision modules for a personal service robot,” in Proceedings of the 3rd International Conference on Ubiquitous Robots and Ambient Intelligence (URAI '06), 2006.
  10. M.-S. Ju, C.-C. K. Lin, D.-H. Lin, I.-S. Hwang, and S.-M. Chen, “A rehabilitation robot with force-position hybrid fuzzy controller: hybrid fuzzy control of rehabilitation robot,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 13, no. 3, pp. 349–358, 2005. View at Publisher · View at Google Scholar · View at Scopus
  11. C. Daimin, Y. Miao, Z. Daizhi, S. Binghan, and C. Xiaolei, “Planning of motion trajectory and analysis of position and pose of robot based on rehabilitation training in early stage of stroke,” in Proceedings of the International Conference on Computer, Mechatronics, Control and Electronic Engineering (CMCE '10), pp. 318–321, August 2010. View at Publisher · View at Google Scholar · View at Scopus
  12. E. Akdoǧan and M. A. Adli, “The design and control of a therapeutic exercise robot for lower limb rehabilitation: physiotherabot,” Mechatronics, vol. 21, no. 3, pp. 509–522, 2011. View at Publisher · View at Google Scholar · View at Scopus
  13. G. C. Pettinaro, I. W. Kwee, L. M. Gambardella et al., “Robotics: a different approach to service robot,” in Proceeding of the 33rd International Symposium on Robotics, October 2002.
  14. A. E. Turgut, H. Çelikkanat, F. Gökçe, and E. Şahin, “Self-organized flocking in mobile robot swarms,” Swarm Intelligence, vol. 2, no. 2–4, pp. 97–120, 2008. View at Publisher · View at Google Scholar · View at Scopus
  15. N. Xiong, J. He, Y. Yang, Y. He, T.-H. Kim, and C. Lin, “A survey on decentralized flocking schemes for a set of autonomous mobile robots,” Journal of Communications, vol. 5, no. 1, pp. 31–38, 2010. View at Publisher · View at Google Scholar · View at Scopus
  16. R. Havangi, M. A. Nekoui, and M. Teshnehlab, “A multi swarm particle filter for mobile robot localization,” International Journal of Computer Science Issues, vol. 7, no. 3, pp. 15–22, 2010. View at Google Scholar
  17. B. Eikenberry, O. Yakimenko, and M. Romano, “A vision based navigation among multiple flocking robots: modeling and simulation,” in Proceedings of the AIAA Modeling and Simulation Technologies Conference, pp. 1–11, Keystone, Colo, USA, August 2006. View at Scopus
  18. R. Olfati-Saber, “Flocking for multi-agent dynamic systems: algorithms and theory,” IEEE Transactions on Automatic Control, vol. 51, no. 3, pp. 401–420, 2006. View at Publisher · View at Google Scholar · View at Scopus
  19. K. Ishii and T. Miki, “Mobile robot platforms for artificial and swarm intelligence researches,” International Congress Series, vol. 1301, pp. 39–42, 2007. View at Publisher · View at Google Scholar · View at Scopus
  20. H. Lee, E.-J. Jung, B.-J. Yi, and Y. Choi, “Navigation strategy of multiple mobile robot systems based on the null-space projection method,” International Journal of Control, Automation and Systems, vol. 9, no. 2, pp. 384–390, 2011. View at Publisher · View at Google Scholar · View at Scopus
  21. T. Germa, F. Lerasle, N. Ouadah, and V. Cadenat, “Vision and RFID data fusion for tracking people in crowds by a mobile robot,” Computer Vision and Image Understanding, vol. 114, pp. 641–651, 2010. View at Publisher · View at Google Scholar · View at Scopus
  22. T. Tammet, J. Vain, and A. Kuusik, “Distributed coordination of mobile robots using RFID technology,” in Proceeding of the 8th WSEAS International Conference on Automatic Control, Modeling and Simulation, pp. 109–116, March 2006.
  23. K. Prathyusha, V. Harini, and S. Balaji, “Design and development of a RFID based mobile robot,” International Journal of Engineering Science & Advanced Technology, vol. 1, no. 1, pp. 30–35, 2011. View at Google Scholar
  24. S. Park and S. Hashimoto, “Indoor localization for autonomous mobile robot based on passive RFID,” in Proceeding of the IEEE International Conference on Robotics and Biomimetics, pp. 1856–1861, Bangkok, Thailand, February 2009. View at Publisher · View at Google Scholar · View at Scopus
  25. R. Sigal, “Algorithms for the Routh-Hurwitz stability test,” Mathematical and Computer Modelling, vol. 13, no. 8, pp. 69–77, 1990. View at Google Scholar · View at Scopus