Journal of Control Science and Engineering
Volume 2019, Article ID 2135914, 12 pages
https://doi.org/10.1155/2019/2135914
Review Article

Challenges for Novice Developers in Rough Terrain Rescue Robots: A Survey on Motion Control Systems

Center for Biomedical and Robotics Technology (BART LAB), Department of Biomedical Engineering, Faculty of Engineering, Mahidol University, Salaya, Thailand

Correspondence should be addressed to Jackrit Suthakorn; jackrit.sut@mahidol.ac.th

Received 4 March 2019; Accepted 5 May 2019; Published 2 June 2019

Guest Editor: Masoud Abbaszadeh

Copyright © 2019 Branesh M. Pillai and Jackrit Suthakorn. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Several researchers have revealed the huge potential of rescue robots in disaster zones. In addition to searching for victims, these intelligent machines are also effective in obtaining useful information from the zones, which helps to optimize search and rescue missions. However, because rescue robots must operate in risky and dangerous environments, they need an efficient motion control system that allows them to operate autonomously or with minimal human control. This paper reviews the use of reliable controllers to enhance the sensing capabilities of rescue robots. The considerable potential of sensorless sensing methods in rescue robots is highlighted. It is shown that sensorless sensing enables developers to create simpler and cheaper robots for various complex situations. Thus, it is imperative to conduct further studies on how to optimize the operation of robots that lack sensors.

1. Introduction

Despite the huge technological advances recorded over the years, disasters remain a recurrent issue in many parts of the world today. These disasters are either natural or man-made, and in most cases they necessitate massive rescue missions. Most such missions involve teams of humans, who are easily overwhelmed and in dire need of additional support; at times, rescue team members must work in very hostile and dangerous situations. Over the years, robots and artificially intelligent systems have demonstrated great reliability in such areas. These machines can be used to search for victims as well as to obtain useful information that can enhance or optimize the search and rescue mission. In other words, rescue robots have emerged as suitable options for the most dangerous rescue missions. Currently, the majority of rescue robots can operate in high-risk situations and extreme terrain, which explains why such systems are specifically designed to be robust and strong.

Because rescue robots must operate in risky and dangerous environments, such machines need to operate autonomously or with minimal human control. Thus, many of these systems serve a wide range of applications, such as wireless sensor networks (WSNs) [1], microassembly [2], medicine [3], urban search and rescue [4], wilderness search and rescue, and surveillance [5, 6]. Macwan et al. observed that coordination among individual robots can be enhanced by establishing centralized control of the systems [7]. In reality, however, controlling the motion of robots can be quite arduous and challenging, especially when the machines are required to exhibit nonlinear behavior. Other factors contributing to this challenge include low computational power, misalignment during manufacturing, and limited self-sensing capabilities.

The sensing capabilities of robots can be improved through the use of more reliable controllers. According to Sosa-Cervantes et al. [8], two basic types of controllers have been proposed for robot motion: dynamics-based controllers and kinematics-based controllers [9, 10]. Many empirical studies on motion control systems for rescue robots are based on one of these two options. It has been observed that dynamic controllers perform better than kinematic controllers [11]. Bessas et al. attributed this superior performance to the fact that dynamic controllers use real-time data on the behavior and state of the robot [12]. Unfortunately, this same feature makes them unsuitable in situations where information on the robot’s state is limited and characterization of the robot dynamics is difficult.

Nevertheless, recent studies have shown that equipping robots with sensors makes them more effective in search and rescue missions. According to Sheh et al. [13], such robots can acquire relevant information from the affected region and relay it back to rescuers through a reliable communication system. In this way, the rescuers become fully aware of the prevailing situation in the zone and can take the relevant precautions. This paper presents a review of motion control systems for rescue robots. First, a discussion of the historical trends of motion control systems in robotics is presented; we then examine the importance of sensorless sensing methods for robots as well as current research challenges in motion control systems for rescue robots.

2. Historical Trend of Motion Control Systems in Robotics

The relationship between motion control systems and robotics goes back more than 50 years. In this section, we review this relationship, focusing on how control theory has provided solutions to problems encountered in robotics, and on how new problems in robotics have in turn triggered the development of new control theory. Historically, robotics was dominated by the machine tool industry. At this early stage, the design philosophy was to make mechanisms as stiff as possible, with each axis (joint) controlled independently as a single-input/single-output (SISO) linear system. Simple tasks like spot welding and material transfer were executed through point-to-point control, while more complex operations like spray painting and welding were made possible by continuous-path tracking. The use of sensors at this stage was either nonexistent or highly limited [14]. However, the execution of advanced tasks like assembly is not possible without the regulation of moments and contact forces. Additionally, Spong and Fujita pointed out that higher payload-to-weight ratios and higher-speed operation demand an adequate understanding of the complex, interconnected nonlinear dynamics of robots [14]. It is this requirement that prompted the development of motion control systems based on theoretical results in robust, nonlinear, and adaptive control, which enable rescue robots to carry out sophisticated operations. Currently, the motion control systems of most robots are highly advanced [15] and, in most cases, are integrated with vision and force systems.
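The independent-joint SISO philosophy described above can be sketched as one PD loop per joint acting on a stiff, decoupled rigid-body model. This is a minimal illustration with assumed inertia, gains, and setpoint, not any particular robot's controller:

```python
def pd_joint_torque(q, qd, q_ref, kp=100.0, kd=20.0):
    """PD law for one joint: tau = Kp*(q_ref - q) - Kd*qd (gains assumed)."""
    return kp * (q_ref - q) - kd * qd

def simulate_joint(q_ref=1.0, inertia=0.5, dt=1e-3, steps=5000):
    """Integrate one rigid joint, I*q'' = tau, under its own PD loop."""
    q, qd = 0.0, 0.0
    for _ in range(steps):
        tau = pd_joint_torque(q, qd, q_ref)
        qdd = tau / inertia        # decoupled: no coupling to other joints
        qd += qdd * dt
        q += qd * dt
    return q

final_angle = simulate_joint()     # settles near the 1.0 rad setpoint
```

Because each joint is closed independently, any coupling torques from the other joints are simply treated as disturbances, which is exactly what the stiff-mechanism design philosophy relied on.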

3. The Early Years

The first industrial robot, named “Unimate,” was developed in 1961. By the 1970s, several Japanese and European firms had entered the market. At this early stage, the motion control systems of robots were based mainly on manipulator arms, so their practical applications were limited to simple tasks like painting, welding, and material handling. From the control perspective, the major challenges developers faced included the lack of quality sensors, the high cost of computation, and an inadequate understanding of robot dynamics.

According to Spong and Fujita [14], despite the above challenges, early developers were driven to advance robot control systems. This motivation was based on two factors. The first was the discovery of the close relationship between automatic control and robot performance, which prompted increased interest in understanding architecture and dynamics as well as system-level design. However, studies of these features faced a number of limitations. For instance, the control schemes used in those studies were based mainly on approximate linear models; in other words, such approaches failed to exploit the robot’s natural dynamics. Additionally, developers failed to adequately integrate force and vision control into the system’s control design, mechanical design, and overall motion control architecture.

The second factor driving the advancement of robot control systems during the early years was Moore’s law, which was exogenous to the robotics and controls communities. The development and implementation of advanced control were made possible by the increasing speed and decreasing cost of computation. It is important to note that these advanced controls were based exclusively on sensors. The two main control methods that emerged in this era were applied experimentally to creative ideas and many innovative robot applications, and many of these in turn influenced research on control systems in general. A notable example is the work of Markiewicz on inverse dynamics control and computed torque [17]. During this early stage, research on robot control always involved mathematical determination of the computational burden of the implementation.
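The computed-torque (inverse dynamics) law referenced above has the form tau = M(q)(qdd_d + Kv*edot + Kp*e) + C(q, qd)*qd + g(q): the model terms cancel the nonlinear dynamics so that the tracking error obeys linear dynamics e'' + Kv*e' + Kp*e = 0. The sketch below applies it to an assumed single-link pendulum; the model parameters and gains are illustrative, not from any cited study:

```python
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, g, Kp, Kv):
    """Computed-torque law: cancel M, C, g; impose linear error dynamics."""
    e, e_dot = q_des - q, qd_des - qd
    return M(q) @ (qdd_des + Kv @ e_dot + Kp @ e) + C(q, qd) @ qd + g(q)

# Assumed single-link pendulum: M = I, C = 0, g(q) = m*l*9.81*sin(q)
I_, m, l = 0.2, 1.0, 0.5
M = lambda q: np.array([[I_]])
C = lambda q, qd: np.zeros((1, 1))
g = lambda q: np.array([m * l * 9.81 * np.sin(q[0])])
Kp, Kv = 100.0 * np.eye(1), 20.0 * np.eye(1)   # critically damped choice

q, qd, dt = np.zeros(1), np.zeros(1), 1e-3
target = np.array([np.pi / 4])
for _ in range(4000):                           # 4 s: regulate to target
    tau = computed_torque(q, qd, target, np.zeros(1), np.zeros(1),
                          M, C, g, Kp, Kv)
    qdd = (tau - g(q)) / I_                     # plant: I*q'' + g(q) = tau
    qd += qdd * dt
    q += qd * dt
```

Because the cancellation here is exact, the joint settles at the target despite gravity; the computational burden of evaluating M, C, and g at every control cycle is precisely what made such schemes expensive in the early years.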

4. Control of Manipulators

By the mid-1980s, the robot manipulator had emerged as the main testbed for motion control of autonomous devices and consequently became the focus of much research. For instance, early studies on inverse dynamics and computed torque by researchers like Markiewicz motivated Hunt et al. to develop the differential geometric method of feedback linearization, which proved particularly helpful in dealing with numerous practical problems both within and outside robotics [17]. In another study, Tarn et al. noted that feedback linearization is equivalent to the inverse dynamics method. However, Spong observed that applying feedback linearization to robots raised the issue of joint flexibility in the manipulators [18]. In fact, this problem has been identified as the major factor militating against the optimal performance of manipulators. Nevertheless, feedback linearization remains a crucial element of robot control and dynamics.

Researchers also conducted a number of studies on connections with robust control. This research revealed that feedback linearization depends on the exact cancellation of nonlinearities, which raised the issue of robustness to parameter uncertainty. Even standard H∞ control cannot overcome this issue because of the constant nature of the uncertainty. Consequently, Spong and Vidyasagar developed a solution for the special case of second-order systems [18]. The solution, based on the small-gain theorem, eventually led to a new approach known as L1-optimal control, a good practical example of the contribution of robotics to new control theory. Additionally, Spong and Fujita identified Lyapunov and sliding-mode methods as other robust control approaches, which also present significant challenges for robot manipulators [14].
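The sliding-mode idea mentioned above can be illustrated on a deliberately simple uncertain system. This is a textbook-style sketch, not a manipulator controller: the scalar plant, the bound a_max, and the tanh boundary layer (a common softening of the discontinuous sign function to reduce chattering) are all assumptions:

```python
import numpy as np

# Sliding-mode control of x' = a*x + u, where the controller knows only
# the bound |a| <= a_max. The switching gain dominates the uncertainty,
# driving the sliding variable s = x to zero.

def smc(x, a_max=2.0, eta=1.0, eps=0.01):
    """Gain exceeds the worst-case drift; tanh(x/eps) approximates sign(x)."""
    return -(a_max * abs(x) + eta) * np.tanh(x / eps)

x, dt = 1.0, 1e-3
a_true = 1.5                  # unknown to the controller, within the bound
for _ in range(5000):         # 5 s of simulation
    x += (a_true * x + smc(x)) * dt
```

The robustness comes from dominance rather than cancellation: no exact model is needed, only a bound on the uncertainty, which is why sliding-mode methods are attractive when feedback linearization is fragile.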

Adaptive control was first introduced in the mid-1980s. As in the previous cases, various researchers investigated its application to robot manipulators. A major breakthrough was achieved by Slotine and Li [19], who pointed out that the adaptive control problem can be resolved by exploiting the skew-symmetry property of the robot inertia matrix and the linearity of the dynamics in the inertia parameters, two characteristics of Lagrangian dynamical systems. The more prominent of these two properties is perhaps skew-symmetry, which was found to be related to the fundamental property of passivity. Consequently, Ortega and Spong introduced the concept of passivity-based control in the context of adaptive control of manipulators. Currently, passivity-based control is one of the prominent design approaches used in various engineering applications, including the development of robots [20].
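The Slotine-Li result exploits linearity in the inertia parameters: the control law is computed with a parameter estimate that is updated online from a composite tracking error. The following 1-DOF reduction is an assumption-laden sketch (unknown mass, sinusoidal reference, hand-picked gains) meant only to show the structure of the law, not the full manipulator case:

```python
import numpy as np

# Slotine-Li style adaptive control for m*q'' = u with m unknown.
# s = e' + lam*e is the composite error; the regressor here is just the
# reference acceleration v_dot, since the dynamics are linear in m.

m_true = 2.0                          # true mass, unknown to the controller
m_hat = 0.5                           # initial parameter estimate
lam, K, gamma = 5.0, 10.0, 1.0        # design gains (assumed)
q, qd, dt = 0.0, 0.0, 1e-3

for k in range(20000):                # 20 s of trajectory tracking
    t = k * dt
    q_d, qd_d, qdd_d = np.sin(t), np.cos(t), -np.sin(t)
    e, e_dot = q - q_d, qd - qd_d
    s = e_dot + lam * e               # composite (sliding) error
    v_dot = qdd_d - lam * e_dot       # reference acceleration = regressor
    u = m_hat * v_dot - K * s         # certainty-equivalence control law
    m_hat += -gamma * v_dot * s * dt  # gradient adaptation: m_hat' = -g*Y*s
    qd += (u / m_true) * dt           # integrate the true plant
    q += qd * dt
```

The Lyapunov function V = (1/2)m*s^2 + (1/2)(m_hat - m)^2/gamma decreases along trajectories, which is the passivity argument in miniature: tracking converges even though m is never known exactly.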

Teleoperation is another remarkable trend recorded during this stage of the evolution of motion control systems. The concept refers to the practice of controlling robotic manipulators remotely. However, this approach must contend with the delays encountered during use: the delay in relaying commands from the operator to the manipulator as well as in communicating sensory feedback. Time delay can trigger instability, especially in bilateral teleoperators [21]. Nevertheless, Anderson and Spong, as well as Niemeyer and Slotine, used passivity-based control to achieve delay-independent stabilization of bilateral teleoperators [19]. The main idea in both breakthroughs was to represent the master-slave teleoperator system as an interconnection of two-port networks and to encode the force and velocity signals as scattering variables before relaying them across the network. In this way, the time-delay network is rendered passive, which stabilizes the whole system. A notable example of a teleoperated robot is the da Vinci surgical system, developed by Intuitive Surgical, which integrates miniature cameras, micromanipulators, and a master-slave control system that makes it possible to perform surgery through a console with a 3D video feed and hand and foot controls. Time delay in teleoperation degrades the remote operation and force feedback that are essential for the normal operation of rescue robots [21].
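The scattering (wave-variable) encoding used in those breakthroughs can be written down directly. Here b is an assumed characteristic impedance and the signals are test values; the identity F*v = (u^2 - w^2)/2 is what makes a pure delay in the wave domain passive, since delaying u and w cannot increase that quantity:

```python
import numpy as np

b = 1.0   # characteristic impedance of the wave transformation (assumed)

def to_waves(F, v):
    """Encode (force, velocity) into forward/backward scattering variables."""
    u = (F + b * v) / np.sqrt(2 * b)
    w = (F - b * v) / np.sqrt(2 * b)
    return u, w

def from_waves(u, w):
    """Decode scattering variables back into (force, velocity)."""
    F = np.sqrt(b / 2) * (u + w)
    v = (u - w) / np.sqrt(2 * b)
    return F, v

F, v = 3.0, 0.5
u, w = to_waves(F, v)
F2, v2 = from_waves(u, w)
# Power balance: F*v == (u**2 - w**2) / 2, so a delayed channel that only
# transports u and w cannot generate energy, i.e., it is passive.
```

Each side of the teleoperator transmits one wave and receives the other; because the encoding is lossless and the delayed channel passive, the interconnection remains stable for any constant delay.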

5. Mobile Robots

Initial attempts to develop mobile robots were confronted by the issue of kinematic control, recognized as an application of differential geometric methods as far back as the 1980s. The issue is captured by Brockett’s theorem, which showed that such systems admit no smooth time-invariant stabilizing control laws [22]. It was this result that eventually triggered the development of alternative control methods; for instance, time-varying approaches to the stabilization of nonholonomic systems and hybrid switching control are both motivated by Brockett’s theorem. These alternative control methods form the basis of current mobile robots, which are used in various applications including rescue missions. Good examples are the robots used in rescue missions after earthquakes and in mines, and some mobile robots use these methods for bomb detection. Apart from rescue missions, some robots are also used in research, such as the mobile robots currently exploring Mars, while others are developed for consumer applications.
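To make the nonholonomic constraint concrete, the sketch below simulates the standard unicycle (differential-drive) kinematic model with a simple polar-coordinate go-to-goal law. The gains, goal, and step size are illustrative assumptions; note that this law stabilizes position only, leaving the final heading free, which is consistent with Brockett's obstruction to smooth time-invariant stabilization of the full pose:

```python
import numpy as np

def step(x, y, th, v, om, dt):
    """Unicycle kinematics: x' = v*cos(th), y' = v*sin(th), th' = om.
    The robot cannot translate sideways (the nonholonomic constraint)."""
    return x + v * np.cos(th) * dt, y + v * np.sin(th) * dt, th + om * dt

def go_to_goal(x, y, th, gx, gy, kv=1.0, kw=4.0):
    """Position-only law: drive toward the goal, heading left free."""
    rho = np.hypot(gx - x, gy - y)                    # distance to goal
    alpha = np.arctan2(gy - y, gx - x) - th           # bearing error
    alpha = np.arctan2(np.sin(alpha), np.cos(alpha))  # wrap to [-pi, pi]
    return kv * rho, kw * alpha                       # (v, omega)

x, y, th, dt = 0.0, 0.0, 0.0, 1e-2
for _ in range(2000):                                 # 20 s
    v, om = go_to_goal(x, y, th, 2.0, 1.0)
    x, y, th = step(x, y, th, v, om, dt)
```

Stabilizing position, heading, and orientation simultaneously requires the time-varying or switching schemes mentioned above; this smooth law can only reach a point, not a full pose.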

6. The Potential of Sensorless Sensing Methods for Rescue Robots

During rescue missions, the robot manipulator is required to operate in an unstructured environment, and in the course of such a mission the system will most likely share its workspace with humans. Heinzmann and Zelinsky noted that, under such conditions, safety becomes a paramount concern [23], because there is always the possibility of accidental collision between the robot and other rescuers, which might result in severe injuries. Conditions that can lead to such occurrences include unpredicted relative motion and uncertainty over the exact location of obstacles. Jimenez et al. noted that robots can only avoid such accidents if they have knowledge of the local environment geometry [24]. Computationally intensive motion planning techniques are also necessary to deal with this issue.

In real-life scenarios, robots can detect imminent collisions using additional external sensors, such as sensitive skins (Hirzinger et al., 2001), on-board vision (Ebert and Henrich, 2002), strain gauges (Garcia et al., 2003), and force load cells. However, there are certain disadvantages to using sensors [25–27]. For instance, Bicchi and Tonietti noted that many sensors are fragile [28], which can limit their efficiency and reliability. Moreover, the authors noted that sensors are relatively costly and can reduce the robot’s maximum payload, the latter being particularly evident where redundant robots are used. Force response is very important in determining the best design for a robot control system, and in recent years force sensors have been incorporated into various industrial applications solely for the assessment of external force. But this type of sensor also has limitations. For instance, Mitsantisuk et al. observed that force sensors tend to give a low measurement bandwidth [29], and the measured force signal carries high noise. These demerits can reduce the system’s performance and robustness. As a result, the authors proposed the use of a disturbance observer (DOB) for the estimation of external force.
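A minimal version of the DOB idea in [29] can be sketched as follows: a constant external torque on one joint is estimated from the commanded torque and measured velocity alone, with no force sensor. The inertia, cutoff frequency, and disturbance value are assumed; for clarity this sketch differentiates velocity directly, whereas practical DOBs restructure the filter so the velocity signal is never explicitly differentiated:

```python
# First-order disturbance observer (DOB) on a single joint.
# Plant model assumed by the observer: J * domega/dt = tau_cmd + tau_dist.

J, g_cut, dt = 0.1, 50.0, 1e-4       # inertia, observer cutoff [rad/s], step

def dob_update(tau_hat, tau_cmd, omega, omega_prev):
    """Low-pass-filter the model residual toward the disturbance estimate."""
    accel = (omega - omega_prev) / dt          # crude differentiation
    tau_dist_raw = J * accel - tau_cmd         # unfiltered residual
    return tau_hat + g_cut * (tau_dist_raw - tau_hat) * dt

tau_ext = 0.4                                  # external torque, unknown
omega, omega_prev, tau_hat = 0.0, 0.0, 0.0
for _ in range(20000):                         # 2 s of simulation
    tau_cmd = -1.0 * omega                     # simple damping command
    omega_prev = omega
    omega += (tau_cmd + tau_ext) / J * dt      # true plant
    tau_hat = dob_update(tau_hat, tau_cmd, omega, omega_prev)
```

The estimate converges to the applied external torque, illustrating why a DOB can substitute for a force sensor: the cutoff g_cut trades estimation bandwidth against noise amplification.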

The demerits associated with sensors have prompted many researchers to study the potential of sensorless sensing methods for rescue robots. Many of these studies focus on the best techniques for detecting and handling collisions when using rescue robots. For instance, in a study titled “A Universal Algorithm for Sensorless Collision Detection of Robot Actuator Faults,” Chen et al. developed a universal algorithm for sensorless collision detection [30]. This control system differs significantly from conventional control algorithms. Building on the dynamic model, the researchers added a classical friction model specifically aimed at boosting the accuracy of the overall dynamic model. The collision detection algorithm was able to achieve real-time detection without the assistance of any external sensors: the system merely evaluates the motor current and the position data generated by the encoder at the robot’s joint. The estimated external torque is then compared against a threshold to detect a collision. The system was found to be very effective at detecting collisions, even without any sensors. Apart from its effectiveness, the algorithm is easy to use and can be applied to robot arms with many degrees of freedom. Because this collision detection algorithm depends mainly on the friction and dynamic models, it can be applied to any rescue task; these models are the major determinants of system accuracy [31].
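The current-plus-model residual scheme described in [30] can be caricatured in a few lines: predict the joint torque from the dynamic and friction models, compare it with the torque implied by the motor current, and flag a collision when the residual crosses a threshold. All numbers below (torque constant, friction coefficients, threshold) are invented for illustration; a real implementation would identify them per joint:

```python
import numpy as np

KT = 0.08          # motor torque constant [Nm/A] (assumed)
THRESHOLD = 0.3    # collision threshold [Nm] (tuned per joint in practice)

def model_torque(qd, qdd, J=0.05, fv=0.02, fc=0.05):
    """Expected joint torque: inertia + viscous + Coulomb friction terms."""
    return J * qdd + fv * qd + fc * np.sign(qd)

def collision_detected(current, qd, qdd):
    """Residual = motor torque (from current) minus model prediction."""
    tau_motor = KT * current
    residual = tau_motor - model_torque(qd, qdd)  # estimated external torque
    return abs(residual) > THRESHOLD

# Free motion: the current matches the model prediction -> no collision
i_free = model_torque(qd=1.0, qdd=2.0) / KT
# Contact: an extra 0.5 Nm of external torque shows up in the current
i_hit = (model_torque(qd=1.0, qdd=2.0) + 0.5) / KT
```

The threshold must sit above the worst-case model and friction error, which is why [31] identifies the friction and dynamic models as the major determinants of accuracy.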

In a similar study, De Luca and Mattone also improved the motion control system of a robot without any sensors [32]. In “Sensorless Robot Collision Detection and Hybrid Force/Motion Control,” the idea was to treat any collision occurring at any point on the robot as a fault of its actuating system. The researchers used a fault detection and isolation technique that requires neither force nor acceleration measurements. After a collision is detected, the system switches to an alternative hybrid force/motion controller, which slides along the obstacle while maintaining contact. In this way, the interaction force is regulated and the overall quality of the motion control system is improved.

In “Sensorless Kinesthetic Teaching of Robotic Manipulators Assisted by Observer-Based Force Control,” Capurso et al. presented a dynamic, real-time lead-through programming (LTP) algorithm [33]. The system lacked force sensors but had a force-feedback control loop. In this study, the architecture of the motion control system comprised a force observer and admittance control: the former provides the estimated external torques, which are then input to the admittance controller. The researchers used a Kalman filter as the observer, which is entirely free of numerical differentiation and matrix inversion. Most motion control systems based on the generalized momentum formulation cannot detect forces at zero velocity. To mitigate this issue, Capurso et al. applied a high-frequency dithering torque [33], which minimizes the static friction normally encountered on the motor side of the gearbox and thus keeps the robot ready for motion. Despite the absence of sensors, the researchers fulfilled their objectives regarding stability during collision and easier handling of the system. Capurso et al. also identified one major benefit of the active LTP over the passive approach: its ability to reduce interaction forces by nearly 50%. The use of LTP in the development of motion control systems enables modification of the manipulator’s behavior.

In another study, titled “Sensorless Collision Detection and Contact Force Estimation for Collaborative Robots Based on Torque Observer,” Tian et al. noted that a robot’s ability to detect collisions and estimate contact force can be enhanced without any extra sensors. Prajumkhaiy and Mitsantisuk also showed that sensorless force estimation can help improve the stability and performance of a robotic system; in their study, “Sensorless Force Estimation of SCARA Robot System with Friction Compensation,” the authors recommended the use of such an approach in robotic design [34].

In “Sensorless Friction-Compensated Passive Lead-Through Programming for Industrial Robots,” Stolt (2015) also demonstrated the potential of sensorless robots [35]. It is well known that programming in robotics must be not only simple but also quick, so that robot behavior can be redeveloped rapidly when the operational environment changes unexpectedly. This objective can be achieved using the LTP method, in which the user guides the robot manually. In this study, Stolt presented a sensorless approach for achieving it. The method relied mainly on disabling the low-level joint controllers, and gravity compensation was also taken into consideration. The authors recorded increased performance in this study.

In summary, sensorless sensing methods have huge potential in the robotic world. Apart from the fact that sensorless sensing enables developers to create simpler and cheaper robots, research has shown that such robots are useful in various complex situations. Thus, it is imperative to conduct further studies on how to optimize the operation of robots that lack sensors.

7. Empirical Studies on the Motion Control Systems of Rescue Robots

A number of researchers have sought to shed more light on the nature of the motion control systems of robots involved in search and rescue missions, and many of these studies have revealed the considerable potential of mobile robots. For instance, Ruangpayoongsak et al. observed that the motion control systems of many robots enable them to carry out search and rescue operations [36], including operating in areas of low visibility and danger, searching for accident victims, providing sensors for mapping, and following humans during fire outbreaks. This study confirmed the capability of rescue robots to enhance the quality of search and rescue missions, mainly by increasing the area of coverage and the speed of operations while lowering the dangers encountered in such missions. Ruangpayoongsak et al. also noted that large-scale, complicated search and rescue missions can be conducted effectively when autonomous rescue robots are used.

Zhao et al. presented a unique search and rescue robot that can operate in very difficult environments such as underground coal mines [37]. The system consists of three units: a mobile robot with a waterproof function, another mobile robot with explosion-proofing, and an operating control unit. The remote-control system integrated into the robot enables it to map the disaster area, collect useful information, and forward danger signals to the rescuers; it thus acts as a multifunctional sensor. The search and rescue robot developed by Zhao et al. demonstrated the ability to operate in difficult terrain.

The ability of robots to operate in difficult terrain was also demonstrated by Kazuyuki and Haruo [38]. The rescue robot used in this case is a multicrawler robot that can operate autonomously over a relatively large area. Kazuyuki and Haruo observed that such robots are especially useful in rescue operations affected by staff shortages. The authors also pointed out that the autonomy of a robot can be boosted by increasing its operability and mobility. The practical part of the study revealed the robot’s ability to adapt to difficult terrain without the need for a complex motion control system.

Currently, the majority of rescue robots are teleoperated. However, there has been increased research on motion control systems that enable robots to operate more autonomously. For instance, Mourikis et al. and Steplight et al. observed that such capabilities enable robots to climb stairs autonomously [39, 40], while Okada et al. noted that improving a robot’s autonomy increases its ability to navigate over uneven terrain [41]. Thus, enhancing the ability of robots to operate in difficult terrain requires the incorporation of motion control systems that allow greater autonomy. In catastrophic disasters, a wide area must be explored within a limited time in order to save victims, so many rescue robots should be deployed simultaneously. However, the human interfaces of previous rescue robots were complicated, so well-trained professional operators were needed; deploying many rescue robots therefore required many professional operators, who are difficult to assemble within a short time after a catastrophic disaster. Thus, Ito et al. pointed out the need for rescue robots that can be operated easily by nonprofessional volunteer staff [42]. To realize such a robot, they proposed a rescue robot system with a human interface like that of a typical, everyday vehicle and a snake-like robot with mechanical intelligence, and they demonstrated the validity and effectiveness of the proposed concept by developing a prototype system.

8. Our Experience on Rescue Robot at BART LAB

The rescue robot team at the Center for Biomedical and Robotics Technology (BART LAB) has been building rescue robots for the past 10 years and has successfully deployed robots in different conditions, whether to test robot capabilities in competitions or to help rescue teams in real situations [43].

The team consists of 15 members and two robots. The first is a rough-terrain robot called TeleOp VI, which supports two modes (teleoperative and autonomous), and the second is an aerial robot called AerialBot I, which introduces lightweight mapping and a vital-sign sensing system. The team constantly researches and develops robots and has participated in regional robot competitions since 2006. To improve the search and rescue plan for a rescue team, the rescue robot in the disaster area should search for paths and collect information. The disaster area is in most cases an uneven surface, and rescue robots have to navigate this terrain during the rescue operation. Maneuvering mobile rescue robots along an unknown path in the presence of unfamiliar objects or terrain surfaces is a challenging task. Most of the robots use flippers to navigate the terrain and manipulator-arm gripping approaches to move through obstacles on their path. Figure 1 shows a typical BART LAB rescue robot performing a mobility task with ease. This rescue robot has four independent flippers around the middle tracks for forward and backward mobility, and a manipulator.

Figure 1: BART LAB Rescue robot performance during “World Robot Summit 2018”: Disaster Robotics Category, Rescue league.

Here, measurement of the control output, the preferred closed-loop dynamics, and an obstacle to be pushed along an unknown pathway are considered [44]. Estimating the change in acceleration and the robot’s live displacement point is important for controlling the rescue robot on its intended pathway. An observer-based controller (OBC) is used to calculate the varying acceleration and the contact point when the BART LAB rescue robot is maneuvering on an unknown pathway [45]. The OBC estimates and compensates for both the varying acceleration and the robot’s position using a torque observer and the predicted torque based on a sensorless control method [46].

9. NIST Standard Testing Methods for Rough Terrain Robots

The main purpose of using robots in emergency response operations is to enhance the safety and effectiveness of emergency responders operating in hazardous or inaccessible environments. The National Institute of Standards and Technology (NIST) has developed a standard test method to describe, in a statistically significant way, how reliably a robot can traverse specified types of terrain, thus providing emergency responders with sufficiently high levels of confidence to determine the applicability of the robot [47]. The performance data captured within this test method are indicative of the tested robot’s capabilities.

This test apparatus is scalable to constrain robot maneuverability during task performance for a range of robot sizes in the confined areas associated with emergency response operations. Variants of the apparatus provide a minimum lateral clearance of 2.4 m (8 ft) for robots expected to operate around environments such as cluttered city streets, parking lots, and building lobbies; a minimum lateral clearance of 1.2 m (4 ft) for robots expected to operate in and around environments such as large buildings, stairwells, and urban sidewalks; a minimum lateral clearance of 0.6 m (2 ft) for robots expected to operate within environments such as dwellings and work spaces, buses and airplanes, and semicollapsed structures; and a minimum lateral clearance of less than 0.6 m (2 ft), with a minimum vertical clearance adjustable from 0.6 m (2 ft) down to 10 cm (4 in.), for robots expected to deploy through breaches and operate within sub-human-sized confined-space voids in collapsed structures.

Our approach toward developing mobility test methods has relied upon well-defined apparatuses to differentiate robot capabilities and typically use the time to negotiate a specified obstacle or path, or the total distance traversed, to measure performance [48]. These mobility tests are always conducted with a remote operator station, out of sight and sound of the robot but within communication range, to emphasize the overall system performance.

10. Importance of the RoboCup Rescue Robot League

The RoboCup Rescue Robot League (RRL) is an international league of teams with one objective: to foster the development and demonstration of advanced robotic capabilities for emergency responders, using annual competitions to evaluate, and teaching camps to disseminate, best-in-class robotic solutions [13]. The league hosts annual competitions to (1) increase awareness of the challenges involved in deploying robots for emergency response applications such as urban search and rescue and bomb disposal, (2) provide objective performance evaluations of mobile robots operating in complex yet repeatable environments, and (3) promote collaboration between researchers. Robot teams demonstrate their capabilities in mobility, sensory perception, localization and mapping, mobile manipulation, practical operator interfaces, and assistive autonomous behaviors that improve remote operator performance and/or robot survivability while searching for simulated victims in a maze of terrains and challenges. The RRL has been held since 2000. The arenas in this competition resemble partly collapsed buildings, with obstacles consisting of standardized and prototypical apparatuses from the DHS-NIST-ASTM International Standard Test Methods for Response Robots [49–51]. The experience gained during these competitions has increased the maturity of the field, which has allowed robots to be deployed after real disasters. At the 2008 Thailand Rescue Robot Championship (TRR 2008), the BART LAB team was one of the 8 finalists from more than 80 participating teams and received the Best-in-Class award for its autonomous robot. In early 2009, the team attended the RoboCup Japan Open 2009 Rescue League against 10 Japanese teams and finished in second place. Figure 2 shows the BART LAB rescue robot searching for victims during the RoboCup 2015 Rescue League.

Figure 2: The BART LAB rescue robot during a victim search at the RoboCup 2015 Rescue League.

RoboCup-Rescue intends to promote research and development in this significant domain by providing a standard simulator and a forum for researchers and practitioners. Although the rescue domain is intuitively appealing as a large-scale multiagent domain, its domain characteristics have not yet been analyzed thoroughly. Kitano et al. presented a detailed analysis of the task domain and elucidated the characteristics a multiagent system needs in this domain. RoboCup-Rescue consists of a simulator league and a real robot league [52]. The simulator league focuses on strategy planning and team coordination, whereas the real robot league focuses on the capabilities of individual robots in rescue operations and on how those robots collaborate to accomplish specific tasks. Takahashi et al. designed a RoboCup-Rescue disaster simulator architecture based on the Hanshin-Awaji Earthquake, aimed at simulating large urban disasters and the activities of rescue agents [53]. The simulator supports both the simulation of heterogeneous agents, such as firefighters and victims, and an interface to real-world disaster environments. It is a comprehensive urban disaster simulator into which a new disaster simulator or new rescue agents can be plugged easily.

11. How Rescue Robots Work in Real Time

Although the term “rescue robot” suggests that the robot performs the actual rescue, in reality such robots have never been put to practical use in securing the safety of disaster victims [54]; the rescue itself has been conducted by members of the rescue team. In practice, a rescue robot is a search-and-rescue robot equipped with cameras and various sensors to assess the situation in the disaster area. Search-and-rescue robots can be used effectively before the actual rescue work, to determine the conditions in parts of the disaster site that members of the rescue team cannot access due to confinement or contamination.

For real-time exploration over disaster areas, Sugiyama et al. developed a coordination procedure for a multirobot rescue system [55]. Real-time exploration means that every robot exploring the area always has a communication path, configured by ad hoc wireless networking, to human operators standing by at a monitor station. Their procedure consists of an autonomous classification of robots into search and relay types, and a behavior algorithm for each class: search robots explore the area, while relay robots act as relay terminals between the search robots and the monitor station. Both the classification rule and the behavior algorithm refer to the forwarding table each robot constructs for ad hoc networking. Computer simulations were executed using a decision-theoretic approach as the exploration strategy of the search robots [55].
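The core idea of deriving roles from the network topology can be sketched in a few lines. This is our own simplified illustration, not Sugiyama et al.'s actual algorithm: we assume each robot knows its next hop toward the monitor station, and any robot that other robots route through must hold position as a relay.

```python
# Simplified illustration (our assumption, not the published algorithm):
# a robot used as a next hop toward the monitor station becomes a relay;
# every other robot is free to act as a search robot.

def classify_roles(next_hop_to_station):
    """next_hop_to_station: dict mapping each robot to its next hop
    toward the monitor station ('station' for one-hop robots)."""
    relays = {hop for hop in next_hop_to_station.values() if hop != "station"}
    return {robot: ("relay" if robot in relays else "search")
            for robot in next_hop_to_station}

# Example: R1 reaches the station directly; R2 routes through R1,
# and R3 routes through R2, so R1 and R2 must stay put as relays.
roles = classify_roles({"R1": "station", "R2": "R1", "R3": "R2"})
print(roles)
```

In the real system the classification would be recomputed as the ad hoc routing tables change, so roles migrate as robots move deeper into the disaster area.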

Real-time recognition of the terrain ahead can effectively improve the traversability of rescue robots. Zhong et al. presented a real-time terrain classification system using a 3D light detection and ranging (LIDAR) sensor on a custom-designed rescue robot [56]. First, LIDAR state estimation and point cloud registration run in parallel to extract the test lane region. Second, normal aligned radial features (NARF) are extracted and downscaled by a distance-based weighting method. Finally, an extreme learning machine (ELM) classifier recognizes the terrain type. Their experimental results demonstrated the effectiveness of the proposed system.
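An ELM is attractive for this task because training reduces to a single least-squares solve: the hidden layer is random and fixed, and only the output weights are learned. The following is a minimal generic sketch of such a classifier (not Zhong et al.'s implementation; the LIDAR/NARF feature extraction is out of scope, so it trains on plain feature vectors):

```python
import numpy as np

# Minimal extreme learning machine (ELM) sketch: random hidden layer,
# output weights solved in closed form by least squares. Illustrative
# only; feature extraction from LIDAR scans is assumed done elsewhere.

class ELMClassifier:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        y = np.asarray(y)
        n_classes = int(y.max()) + 1
        # Random, untrained hidden-layer weights and biases.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)       # random hidden features
        T = np.eye(n_classes)[y]               # one-hot targets
        # Only the output weights are learned, via least squares.
        self.beta, *_ = np.linalg.lstsq(H, T, rcond=None)
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)
```

Because fitting is a single linear solve rather than iterative backpropagation, retraining is cheap enough to be plausible on a rescue robot's onboard computer, which is the practical appeal cited for ELMs in this setting.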

On August 11, 2014, the U-Place condominium, a six-floor building under construction, collapsed in Pathum Thani, Thailand, trapping a number of injured people. The BART LAB Rescue Robotics team was called by the rescue team to join the survey and rescue mission on site. The top floor, still under construction, had collapsed into a sandwich structure, and some of the injured were trapped at depths that were difficult to access from the outside. The BART LAB rescue robot is designed to operate in rough and complex terrain; however, its height of 60 cm limits the regions it is able to access. During the operation, the rescue team made a hole to access the 3rd to 4th floors to locate survivors, and a preliminary observation indicated a possible survivor. The BART LAB rescue robot was assigned to survey the scene and provide more information on the location of survivors and the structure of the collapse.

The robot was operated remotely from an outside station and passed from the 6th floor down to the 4th floor. The hole became narrower and lower, and additional obstacles included the steel rods reinforcing the concrete structure. These major obstacles limited the robot's movement. Nevertheless, this was the first mission in which the BART LAB Rescue Robotics team took part in an on-site operation (Figure 3), and the collaboration with the rescue team provided valuable feedback for future improvement and development. Our ultimate goal is to produce, through research and development, a reliable rescue robot for application at real disaster sites around the world.

Figure 3: On-site experience at the U-Place condominium, Pathum Thani, Thailand.

12. Safety, Security, and Rescue Robotics (SSRR)

To put service robots to use, demonstrative experiments in real situations are necessary. Through such experiments, one can verify the effectiveness of research results and identify new research issues. However, the legal systems and safety guidelines for demonstrative experiments with service robots are not yet well established, which makes it difficult to carry out experiments effectively and could slow research progress on service robot utilization. Therefore, based on a case study from the “Real World Robot Challenge,” a demonstrative experiment for autonomous mobile robots in outdoor public space, Igarashi et al. proposed basic safety guidelines for demonstrative experiments with mobile service robots. Specifically, the issues that must be considered in public-space demonstrative experiments, and an explicit framework for them, are clarified based on international safety standards. In addition, Igarashi et al. proposed a risk assessment method and a risk management system that researchers can use easily during experiments, in order to arrive at feasible protective measures for the risks and consequently accelerate the utilization of service robots [57].

Safety, security, and rescue robotics (SSRR) is an important application field that can be viewed as a prototypical example of a domain where networked mobile robots are used to explore unstructured environments that are inaccessible or dangerous for humans [58]. Teleoperation based on wireless networks is much more complex than one might expect at first glance, because it goes well beyond mere mappings of low-level user inputs, such as joystick commands, to motor activations on a robot. Teleoperation for SSRR must move up to the behavior and mission levels, where a single operator triggers short-time autonomous behaviors or supervises a whole team of autonomously operating robots. Consequently, a significant amount of heterogeneous data (video, maps, goal points, victim data, and so on) must be transmitted between the robots and mission control. Birk et al. presented a networking framework for teleoperation in SSRR [16, 59, 60]. The framework covers three teleoperation stages, ranging from motion (stage 1) to behavior (stage 2) to mission-level teleoperation (stage 3) (Figure 4). It proved its usefulness in various field tests, including ELROB, as well as in controlled experiments in high-fidelity simulation, where it was shown that a multirobot network supervised by a single operator is indeed beneficial.
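The three stages differ in abstraction and in how many robots a single command addresses. The sketch below is our own naming, not Birk et al.'s API: motion commands map to one robot's actuator setpoints, behavior commands trigger a short autonomous routine on one robot, and mission commands task the whole supervised team.

```python
# Sketch (our naming, not the published framework's API) of the three
# teleoperation stages as command levels of increasing abstraction.

from enum import Enum

class Stage(Enum):
    MOTION = 1    # e.g. joystick -> motor setpoints, one robot
    BEHAVIOR = 2  # e.g. "climb stairs", short-time autonomy, one robot
    MISSION = 3   # e.g. "explore sector B", a whole supervised team

def dispatch(stage, command, robots):
    """Route an operator command to the appropriate control layer."""
    if stage is Stage.MOTION:
        return [f"{robots[0]}: set motors {command}"]
    if stage is Stage.BEHAVIOR:
        return [f"{robots[0]}: run behavior '{command}'"]
    # Mission level: one command fans out to every robot in the team.
    return [f"{r}: mission task '{command}'" for r in robots]
```

The fan-out at the mission level is what makes single-operator supervision of a multirobot network possible, at the cost of the heterogeneous traffic (video, maps, victim data) the framework must carry back.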

Figure 4: Teleoperation stages for safety, security, and rescue robotics (SSRR). In many application scenarios the three levels are interlinked and involve a significant amount of heterogeneous traffic over wireless links (reproduced from Birk et al. [16]).

13. Conclusion and Gaps in the Literature

In conclusion, several studies have clearly revealed the potential of robots in search and rescue missions. They are particularly effective in searching for and obtaining useful information that can enhance or optimize such missions. However, the fact that rescue robots have to operate in risky and dangerous environments necessitates that such machines operate autonomously or with minimal human control. Currently this is achieved through the increased use of sensors. Unfortunately, sensors are fragile and increase the cost of the final product. This review has demonstrated the huge potential of sensorless sensing methods in the robotic world.

A number of gaps in the literature were also identified. First, further studies are needed on how to optimize the operation of robots that lack sensors. Future studies should also focus on enlarging the operational range of rescue robots through reliable, wide-range wireless networks, and on advancing robots from their current semiautonomy to full autonomy. Lastly, sophisticated motion control systems need to be developed that will enable robots to maneuver and transport victims out of disaster zones.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors would like to express their gratitude to the BART LAB Rescue Robot team members and BART LAB researchers for their kind support and for assembling the BART LAB rescue robot platform used in this work. This research is partially supported by a National Research University Grant through Mahidol University, Thailand, and by a grant from the Office of the Higher Education Commission (OHEC), Thai Military R&D 2017.

References

  1. K. Lembke, Ł. Kietliński, M. Golański, and R. Schoeneich, “RoboMote: mobile autonomous hardware platform for wireless ad-hoc sensor networks,” in Proceedings of the 2011 IEEE International Symposium on Industrial Electronics (ISIE 2011), pp. 940–944, Gdansk, Poland, June 2011.
  2. P. Vartholomeos, K. Vlachos, and E. Papadopoulos, “Analysis and motion control of a centrifugal-force microrobotic platform,” IEEE Transactions on Automation Science and Engineering, vol. 10, no. 3, pp. 545–553, 2013.
  3. A. W. Mahoney and J. J. Abbott, “Five-degree-of-freedom manipulation of an untethered magnetic device in fluid using a single permanent magnet with application in stomach capsule endoscopy,” International Journal of Robotics Research, vol. 35, no. 1-3, pp. 129–147, 2016.
  4. R. Fearing, “Challenges for effective millirobots,” in Proceedings of the 2006 IEEE International Symposium on Micro-NanoMechatronics and Human Science, pp. 1–5, IEEE, Nagoya, Japan, November 2006.
  5. Z. Kashino, J. Y. Kim, G. Nejat, and B. Benhabib, “Spatiotemporal adaptive optimization of a static-sensor network via a non-parametric estimation of target location likelihood,” IEEE Sensors Journal, vol. 17, no. 5, pp. 1479–1492, 2017.
  6. S. Bergbreiter, “Effective and efficient locomotion for millimeter-sized microrobots,” in Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4030–4035, Nice, France, September 2008.
  7. A. Macwan, J. Vilela, G. Nejat, and B. Benhabib, “A multirobot path-planning strategy for autonomous wilderness search and rescue,” IEEE Transactions on Cybernetics, vol. 45, no. 9, pp. 1784–1797, 2015.
  8. C. Sosa-Cervantes, R. Silva-Ortigoza, C. Marquez-Sanchez, H. Taud, and G. Saldana-Gonzalez, “Trajectory tracking task in wheeled mobile robots: a review,” in Proceedings of the 2014 International Conference on Mechatronics, Electronics and Automotive Engineering (ICMEAE), pp. 110–115, Cuernavaca, Mexico, November 2014.
  9. S. G. Tzafestas, Introduction to Mobile Robot Control, Elsevier, Oxford, UK, 2013, https://www.elsevier.com/books/introduction-to-mobile-robot-control/tzafestas/978-0-12-417049-0.
  10. D. Kim and J. Oh, “Tracking control of a two-wheeled mobile robot using input-output linearization,” Control Engineering Practice, vol. 7, no. 3, pp. 369–373, 1999.
  11. M. Mendili and F. Bouani, “Predictive control of mobile robot using kinematic and dynamic models,” Journal of Control Science and Engineering, vol. 2017, Article ID 5341381, 11 pages, 2017.
  12. A. Bessas, A. Benalia, and F. Boudjema, “Integral sliding mode control for trajectory tracking of wheeled mobile robot in presence of uncertainties,” Journal of Control Science and Engineering, vol. 2016, Article ID 7915375, 10 pages, 2016.
  13. R. Sheh, A. Jacoff, A. M. Virts et al., “Advancing the state of urban search and rescue robotics through the RoboCup Rescue Robot League competition,” in Field and Service Robotics, vol. 92 of Springer Tracts in Advanced Robotics, K. Yoshida and S. Tadokoro, Eds., pp. 127–142, Springer-Verlag, Berlin, Germany, 2014.
  14. M. W. Spong and M. Fujita, “Control in robotics,” IEEE Control Systems Society, vol. 10, no. 2, pp. 1–25, 2011.
  15. A. Aouf, L. Boussaid, and A. Sakly, “Same fuzzy logic controller for two-wheeled mobile robot navigation in strange environments,” Journal of Robotics, vol. 2019, Article ID 2465219, 11 pages, 2019.
  16. A. Birk, S. Schwertfeger, and K. Pathak, “A networking framework for teleoperation in safety, security, and rescue robotics,” IEEE Wireless Communications Magazine, vol. 16, no. 1, pp. 6–13, 2009.
  17. B. R. Markiewicz, “Analysis of the computed torque drive method and comparison with conventional position servo for a computer-controlled manipulator,” NASA-JPL Technical Memo, 1973, https://ntrs.nasa.gov/search.jsp.
  18. M. W. Spong and M. Vidyasagar, “Robust linear compensator design for nonlinear robotic control,” IEEE Journal on Robotics and Automation, vol. 3, no. 4, pp. 345–351, 1987.
  19. J.-J. E. Slotine and W. Li, “On the adaptive control of robot manipulators,” International Journal of Robotics Research, vol. 6, no. 3, pp. 147–157, 1987.
  20. R. Ortega and M. W. Spong, “Adaptive control of robot manipulators: a tutorial,” in Proceedings of the 27th IEEE Conference on Decision and Control, pp. 1575–1584, Austin, TX, USA, 1989, https://doi.org/10.1109/CDC.1988.
  21. Y. Li, “Stabilization of teleoperation systems with communication delays: an IMC approach,” Journal of Robotics, vol. 2018, Article ID 1018086, 9 pages, 2018.
  22. R. W. Brockett, “Asymptotic stability and feedback stabilization,” in Differential Geometric Control Theory, R. W. Brockett, R. S. Millman, and H. J. Sussmann, Eds., vol. 27, pp. 181–191, Birkhäuser, Boston, Mass, USA, 1983.
  23. J. Heinzmann and A. Zelinsky, “Quantitative safety guarantees for physical human-robot interaction,” International Journal of Robotics Research, vol. 22, no. 7-8, pp. 479–504, 2016.
  24. P. Jimenez, F. Thomas, and C. Torras, “Collision detection algorithms for motion planning,” in Robot Motion Planning and Control, vol. 229 of Lecture Notes in Control and Information Sciences, pp. 305–343, Springer, London, UK, 1998.
  25. G. Hirzinger, A. Albu-Schaffer, M. Hahnle, I. Schaefer, and N. Sporer, “On a new generation of torque controlled light-weight robots,” in Proceedings of the 2001 IEEE International Conference on Robotics and Automation (ICRA), pp. 3356–3363, Seoul, South Korea, 2001.
  26. D. Ebert and D. Henrich, “Safe human-robot-cooperation: image-based collision detection for industrial robots,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2002), pp. 1826–1831, Lausanne, Switzerland, 2002.
  27. A. García, V. Feliu, and J. A. Somolinos, “Experimental testing of a gauge based collision detection mechanism for a new three-degree-of-freedom flexible robot,” Journal of Robotic Systems, vol. 20, no. 6, pp. 271–284, 2003.
  28. A. Bicchi and G. Tonietti, “Dealing with the safety-performance tradeoff in robot arms design and control,” IEEE Robotics & Automation Magazine, vol. 11, no. 2, pp. 22–33, 2004.
  29. C. Mitsantisuk, K. Ohishi, and S. Katsura, “Estimation of action/reaction forces for the bilateral control using Kalman filter,” IEEE Transactions on Industrial Electronics, vol. 59, no. 11, pp. 4383–4393, 2012.
  30. S. Chen, M. Luo, and F. He, “A universal algorithm for sensorless collision detection of robot actuator faults,” Advances in Mechanical Engineering, vol. 10, no. 1, pp. 1–10, 2018.
  31. R. Al-Jarrah, M. Al-Jarrah, and H. Roth, “A novel edge detection algorithm for mobile robot path planning,” Journal of Robotics, vol. 2018, Article ID 1969834, 12 pages, 2018.
  32. R. Mattone and A. De Luca, “Conditions for detecting and isolating sets of faults in nonlinear systems,” in Proceedings of the 44th IEEE Conference on Decision and Control, pp. 1005–1010, Seville, Spain, 2005.
  33. M. Capurso, M. M. Ardakani, R. Johansson, A. Robertsson, and P. Rocco, “Sensorless kinesthetic teaching of robotic manipulators assisted by observer-based force control,” in Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 945–950, Singapore, May 2017.
  34. Y. Tian, Z. Chen, T. Jia, A. Wang, and L. Li, “Sensorless collision detection and contact force estimation for collaborative robots based on torque observer,” in Proceedings of the 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 946–951, Qingdao, China, December 2016.
  35. A. Stolt, F. B. Carlson, M. M. Ardakani, I. Lundberg, A. Robertsson, and R. Johansson, “Sensorless friction-compensated passive lead-through programming for industrial robots,” in Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3530–3537, Hamburg, Germany, September 2015.
  36. N. Ruangpayoongsak, H. Roth, and J. Chudoba, “Mobile robots for search and rescue,” in Proceedings of the IEEE International Workshop on Safety, Security and Rescue Robotics, pp. 211–216, Kobe, Japan, 2005.
  37. J. Zhao, J. Gao, F. Zhao, and Y. Liu, “A search-and-rescue robot system for remotely sensing the underground coal mine environment,” Sensors, vol. 17, no. 10, p. 2426, 2017.
  38. K. Ito and H. Maruyama, “Semi-autonomous serially connected multi-crawler robot for search and rescue,” Advanced Robotics, vol. 30, no. 7, pp. 489–503, 2016.
  39. A. Mourikis, N. Trawny, S. Roumeliotis, D. Helmick, and L. Matthies, “Autonomous stair climbing for tracked vehicles,” The International Journal of Robotics Research, vol. 26, no. 7, pp. 737–758, 2007.
  40. S. Steplight, G. Egnal, S. Jung, D. Walker, C. Taylor, and J. Ostrowski, “A mode-based sensor fusion approach to robotic stair-climbing,” in Proceedings of the 2000 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2000), pp. 1113–1118, Takamatsu, Japan, 2000.
  41. Y. Okada, K. Nagatani, K. Yoshida, S. Tadokoro, T. Yoshida, and E. Koyanagi, “Shared autonomy system for tracked vehicles on rough terrain based on continuous three-dimensional terrain scanning,” Journal of Field Robotics, vol. 28, no. 6, pp. 875–893, 2011.
  42. K. Ito, Z. Yang, K. Saijo, K. Hirotsune, A. Gofuku, and F. Matsuno, “A rescue robot system for collecting information designed for ease of use - a proposal of a rescue systems concept,” Advanced Robotics, vol. 19, no. 3, pp. 249–272, 2005.
  43. J. Suthakorn, S. Ongwattanakul, N. Nillahoot et al., “Team description paper - BART LAB Rescue Robotics Team (Thailand),” in Proceedings of the World Robot Summit, Tokyo, Japan, 2018, http://worldrobotsummit.org/en/wrs2018/.
  44. M. B. Pillai, S. Nakdhamabhorn, K. Borvorntanajanya, and J. Suthakorn, “Enforced acceleration control for DC actuated rescue robot,” in Proceedings of the 2016 XXII International Conference on Electrical Machines (ICEM), pp. 2640–2648, Lausanne, Switzerland, September 2016.
  45. Y. Wang, P. Wang, Z. Li, and H. Chen, “Observer-based controller design for a class of nonlinear networked control systems with random time-delays modeled by Markov chains,” Journal of Control Science and Engineering, vol. 2017, Article ID 1523825, 13 pages, 2017.
  46. B. M. Pillai and J. Suthakorn, “Motion control applications: observer based DC motor parameters estimation for novices,” International Journal of Power Electronics and Drive Systems (IJPEDS), vol. 10, no. 1, pp. 195–201, 2019.
  47. ASTM E2801-11, “Standard test method for evaluating emergency response robot capabilities: mobility: confined area obstacles: gaps,” 2011, https://www.astm.org/Standards/E2801.htm.
  48. J. Suthakorn, S. Shah, S. Jantarajit et al., “On the design and development of a rough terrain robot for rescue missions,” in Proceedings of the 2008 IEEE International Conference on Robotics and Biomimetics, pp. 1830–1835, Bangkok, Thailand, February 2009.
  49. K. Ohno, S. Morimura, S. Tadokoro, E. Koyanagi, and T. Yoshida, “Semi-autonomous control system of rescue crawler robot having flippers for getting over unknown-steps,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '07), pp. 3012–3018, San Diego, Calif, USA, November 2007.
  50. T. Yoshida, K. Nagatani, S. Tadokoro, T. Nishimura, and E. Koyanagi, “Improvements to the rescue robot Quince toward future indoor surveillance missions in the Fukushima Daiichi nuclear power plant,” in Field and Service Robotics, vol. 92 of Springer Tracts in Advanced Robotics, pp. 19–32, Springer, Berlin, Germany, 2014.
  51. S. Kohlbrecher, J. Meyer, O. von Stryk, and U. Klingauf, “A flexible and scalable SLAM system with full 3D motion estimation,” in Proceedings of the IEEE International Symposium on Safety, Security and Rescue Robotics, pp. 155–160, Kyoto, Japan, 2011.
  52. H. Kitano, S. Tadokoro, I. Noda et al., “RoboCup Rescue: search and rescue in large-scale disasters as a domain for autonomous agents research,” in Proceedings of the 1999 IEEE International Conference on Systems, Man, and Cybernetics (SMC '99), pp. 739–743, Tokyo, Japan, 1999.
  53. T. Takahashi, I. Takeuchi, T. Koto, S. Tadokoro, and I. Noda, “RoboCup-Rescue disaster simulator architecture,” in RoboCup-97: Robot Soccer World Cup I, P. Stone, T. Balch, and G. Kraetzschmar, Eds., vol. 1395 of Lecture Notes in Computer Science, pp. 379–384, Springer, Berlin, Germany, 1998.
  54. H. Yasushi and T. Osamu, “Development of communication technology for search and rescue robots,” Journal of the National Institute of Information and Communications Technology, vol. 58, no. 1, pp. 131–151, 2011.
  55. H. Sugiyama, T. Tsujioka, and M. Murata, “Coordination of rescue robots for real-time exploration over disaster areas,” in Proceedings of the 2008 11th IEEE International Symposium on Object and Component-Oriented Real-Time Distributed Computing, pp. 170–177, Orlando, FL, USA, May 2008.
  56. Y. Zhong, J. Xiao, H. Lu, and H. Zhang, “Real-time terrain classification for rescue robot based on extreme learning machine,” in Cognitive Systems and Signal Processing (ICCSIP 2016), F. Sun, H. Liu, and D. Hu, Eds., pp. 385–397, Springer, 2016.
  57. H. Igarashi, T. Kimura, and F. Matsuno, “Risk management method of demonstrative experiments for mobile robots in outdoor public space,” Journal of the Robotics Society of Japan, vol. 32, no. 5, pp. 473–480, 2014.
  58. R. Hanai, K. Harada, I. Hara, and N. Ando, “Design of robot programming software for the systematic reuse of teaching data including environment model,” ROBOMECH Journal, vol. 5, no. 1, 2018.
  59. J. Poppinga, A. Birk, and K. Pathak, “Hough based terrain classification for realtime detection of drivable ground,” Journal of Field Robotics, vol. 25, no. 1-2, pp. 67–88, 2008.
  60. R. R. Murphy and A. Kleiner, “A community-driven roadmap for the adoption of safety security and rescue robots,” in Proceedings of the 2013 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), pp. 1–5, Linkoping, Sweden, October 2013.