Journal of Robotics
Volume 2015 (2015), Article ID 318695, 14 pages
Research Article

Field Results of a Navigation Architecture with Timing Constraints

1Department of Industrial Electronics, University of Minho, 4804-533 Guimaraes, Portugal
2ISR, Instituto Superior Tecnico, 1049-001 Lisboa, Portugal

Received 23 June 2015; Accepted 28 September 2015

Academic Editor: Yangmin Li

Copyright © 2015 Jorge Silva et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


This paper presents results of field tests of a mobile robot controlled by a navigation architecture accounting for timing constraints in an indoor environment. Dependability properties characterize the effects of disturbances on the ability to successfully accomplish any assigned missions, described in terms of the stability of an equilibrium state identified with a goal location. The stability is analyzed using Contraction theory. A localization system based on artificial landmarks is used to obtain location estimates that enable the robot to autonomously cover large distances. Monte Carlo tests assess the architecture under different real environment conditions including recovering from disturbing events such as landmark detection failures, robot kidnapping, unexpected collisions, and changes in the density of obstacles in the environment. Tests include long-run missions of around 2900 m lasting for 4.5 hours.

1. Introduction

Despite the great importance of stability theory in the design of control architectures for mobile robots, little research has addressed the stability of such architectures when robots move in real environments. This paper focuses on verifying the stability of a navigation architecture with timing constraints, using Contraction theory, in a realistic indoor environment where the robot performs long-term missions.

The architecture controls a mobile robot that should travel between locations within a specified movement time frame. The movements of the robot are automatically adjusted to compensate for possible disturbances: if the robot is taking too long to complete a mission, its velocity is increased to compensate for the delay. This temporal stabilization is important for handling timed action sequences. For instance, if the robot has to complete a sequence of missions, the next mission is only initiated after the previous one has terminated. An overview of the temporal stabilization mechanism is given in [1]. Temporal stabilization is not limited to mobile navigation; other robotic frameworks have addressed temporal stabilization of movements to perform rhythmic actions, such as drumming [2], juggling [3], and human-robot interaction scenarios [4, 5].
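The compensation idea above can be pictured with a minimal sketch: recompute the velocity command from the remaining distance and remaining time, clipped to the robot's physical limits. This is an illustration under assumed names, not the paper's actual Timing Control law; the 0.1 and 0.8 m/s limits follow the physical constraints reported later in the paper.

```python
def timed_velocity(remaining_dist, elapsed, time_frame, v_max=0.8, v_min=0.1):
    """Velocity needed to cover remaining_dist in the time left,
    clipped to the robot's physical limits (illustrative sketch)."""
    time_left = time_frame - elapsed
    if time_left <= 0:          # mission already late: fall back to v_min
        return v_min
    v = remaining_dist / time_left
    return max(v_min, min(v_max, v))

# 10 m to go in 50 s -> 0.2 m/s; after a 30 s delay with 10 m still to
# go, the command rises to 0.5 m/s to compensate.
v_nominal = timed_velocity(10.0, 0.0, 50.0)
v_delayed = timed_velocity(10.0, 30.0, 50.0)
```

The clipping step mirrors the saturation issue discussed in Section 2.3: when the required velocity exceeds the physical limit, the delay can no longer be fully compensated.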

The architecture is based on nonlinear dynamical systems and feedthrough maps connected in a mesh and organized in four modules, namely, Motion (global) Control, Local Control, Timing Control, and Localization System. The first two modules are in charge of guiding the robot from its initial position to the goal position where the mission ends. The Motion module considers the map of the environment to generate the shortest path that enables the robot to complete the mission. The Local module takes into account sensory information as the robot avoids obstacles while following the shortest path. The Timing module is responsible for ensuring that the timing constraints for the mission are accomplished and the Localization System module provides at each instant of time estimates of the robot’s pose.

Contraction theory provides a set of tools to analyze the convergence behavior of nonlinear systems. Contraction analysis differs from the classical point of view of stability [6–8], as stability is not viewed relative to some equilibrium point. Contraction is an incremental form of stability; that is, the focus is on the stability of trajectories with respect to one another [9], rather than only on attraction towards an equilibrium position. Incremental stability is useful for analyzing the robustness and performance properties of nonlinear closed-loop systems and for verifying whether the system holds suitable steady-state properties [10, 11].

As an alternative to Lyapunov theory [12, 13], Contraction theory establishes exponential stability of systems [6] without having to select an energy-like Lyapunov function. Additionally, it explicitly incorporates the control input in the stability analysis [14], similarly to Input-to-State Stability (ISS) [15]: systems that are universally contracting with respect to their inputs are stable in the Input-to-State sense [14]. Contraction theory studies stability when external perturbations (noise and uncertainty in the control of the system) never drive the system out of its contraction bounds. This implies that such perturbations have no effect on the global exponential convergence, an important aspect when dealing with real-time applications.

Lyapunov functions have been used for formal stability analysis of navigation architectures [16–18]. However, these architectures do not consider global path planning and address only local navigation. On the other hand, architectures considering both local and global navigation did not attempt to derive formal conditions under which the architecture is stable [19–22]. Moreover, the inclusion of timing constraints (temporal stabilization) has not been considered for mission success. Concepts from nonlinear analysis and control were used in [23], in an architecture based on dynamical systems, to obtain necessary conditions for mission success in terms of stability concepts. However, global navigation was not addressed, which limits the ability of the architecture to deal with more complex missions. The approach in [24] included global navigation in the architecture and successfully demonstrated its stability in missions under a Webots simulation environment [25]. Mission success is identified with the stability of the single fixed point of the architecture.

This paper is a follow-up of previous work [23, 24], where concepts from Contraction theory were used to derive conditions under which a navigation architecture based on a mesh of dynamical systems and feedthrough maps has a stable fixed point reachable in practical terms. Further, a stability condition based on Contraction theory verifies the ability of the robot to successfully complete its mission. The paper focuses on assessing the previously obtained theoretical results through long-run experiments. As novelties, this paper extends the duration and complexity of those missions to a real environment where people and other obstacles share the space with the robot, while verifying the stability of the architecture. Further, the timing constraints are included in the stability analysis to verify whether the mission is successfully completed within the movement time frame. This is achieved through a performance timing index that assesses the ability of the robot to deal with the timing constraints of the mission. The stability analysis in the paper can be viewed as an a priori roadmap for system design and update, which can be extended to a wide range of control architectures.

The performance of the architecture is assessed via Monte Carlo tests that include the following: (i) long-run missions, in which the architecture should produce smooth and uninterrupted trajectories for the robot over long distances without human supervision; (ii) missions to verify the performance of the Localization System module, concerning accuracy, precision, and convergence time; (iii) missions to verify the robot dependability through a set of features commonly accepted, which include safety, integrity, and reliability (see, e.g., [26, 27]).

The tests were carried out in a typical university indoor environment, including labs and public spaces, for example, corridors and halls, using a Pioneer 3-DX robot. The estimates of the robot’s pose were obtained by combining the detection of visual landmarks with odometry through an Extended Kalman Filter (EKF).

The paper is organized as follows. Section 2 reviews the global system and addresses the stability of the navigation architecture using Contraction theory. Section 3 details the experimental setup that includes the localization system of the robot. Section 4 addresses the dependability of the architecture through Monte Carlo tests by considering most of the features commonly included in the property. Section 5 depicts examples of field missions under different disturbance levels. Section 6 concludes the paper with a brief remark about the evolution of this architecture.

2. The Global System under Contraction Theory

This section briefly reviews the global system described in [24]. The basic concepts of Contraction theory, its relation to the stability of the global system, and its desirable properties, namely, mission success, are also presented.

The global system can be represented by three maps (see Figure 1): one standing for the robot, one for the architecture that controls the robot, and one for the environment. The environment is assumed neutral, so that knowledge about it is not relevant to the design of the control block.

Figure 1: A graphical representation of the global system.

The robot and architecture maps are assumed to be discrete dynamical systems (current robotic systems are computer based and hence intrinsically discrete) of the form given in (1), where the input and output perturbations are, respectively, implicit in the error and interaction signals. The inputs comprise the goal location where the mission ends and commands that trigger actions in the robot. The smoothness assumption is reasonable: in practical situations, nonsmooth or even discontinuous maps can be replaced by smooth maps using splines (see, e.g., [28], or [29, 30] for other regularization methods).

The loop in Figure 1 can be viewed as an iterative process in which a supervisor feeds the robot, whose state in turn affects the environment and is fed back to the supervisor. Without loss of generality it can be assumed that the supervisor observes error signals from the output of the environment. If a mission is completed successfully, the error signals tend to zero, since the fixed point that identifies mission success is a global point of convergence and no other local minima occur in the region of interest. Thus, the fixed point lies in the null space of the environment map. From a conceptual point of view, this means that the environment is assumed not to fool the supervisor, for example, by having a null space that does not include the mission success, which would make the robot stop before completing the mission.

The mission is successful if the dynamical systems of the global system (Figure 1) are contracting maps and converge to a fixed point of the open-loop map within the movement time frame. This means that the robot reaches the goal location in time.

Dynamical systems are the basis for the navigation architecture due to their intrinsic properties. The small number of configuration parameters in such systems tends to reduce the dimensionality of the control problem [4]. They are amenable to formal analysis, namely, when considering stability conditions in real robots. Dynamical systems are a commonly used framework to handle timing constraints [5, 31, 32], while providing flexible and responsive behaviors [2, 33]. Finally, they also allow continuous coupling to sensory information and offer robustness against small perturbations.

2.1. Contraction Theory

One of the most widely used tools to prove the stability of nonlinear dynamical systems is Contraction theory [6]. If a map is a contraction (Lipschitz continuous with constant smaller than 1) on a subset of Euclidean space, then it has a fixed point there. Following the corresponding theorem in [34], a Lipschitz continuous differentiable map is a contraction if the Euclidean norm of its Jacobian matrix is strictly smaller than 1, as stated in condition (2).

The fixed point of a contraction is an interesting solution since it depends smoothly on the contraction and simultaneously presents asymptotic stability in the case of attractive behavior [6]. Moreover, the fixed point is stable under perturbations of the map, an important robustness property for control architectures in general (see the propositions on perturbed fixed points in [34]). Furthermore, when the map represents a closed-loop iterative process, the fixed point can be identified with mission success; for example, the fixed point is identified with a goal state.
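The convergence of an iterated contraction to its fixed point can be illustrated numerically. The sketch below is a generic Banach-style fixed-point iteration, not the paper's architecture; the example map 0.5·cos(x) is an assumption chosen because its derivative magnitude is bounded by 0.5 < 1, so it is a contraction on the whole real line.

```python
import math

def iterate_to_fixed_point(f, x0, tol=1e-10, max_iter=1000):
    """For a contraction f, the sequence x_{k+1} = f(x_k) converges
    to the unique fixed point, regardless of the initial condition."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence: f may not be a contraction")

# f(x) = 0.5*cos(x) has |f'(x)| <= 0.5 < 1, hence a unique fixed point.
f = lambda x: 0.5 * math.cos(x)
xf = iterate_to_fixed_point(f, x0=0.0)   # at the fixed point, f(xf) == xf
```

Starting the iteration from a different initial condition yields the same fixed point, mirroring the robustness property cited above.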

2.2. Stability and Mission Success

Figure 2 expands the global system represented in Figure 1 and details the architecture composed of the modules Motion Control, Local Control, and Timing Control. The Motion Control receives the map of the environment and the position of the robot and generates the goal direction that the robot should follow. This direction is received by the Local Control, which verifies, based on the position of obstacles, whether the goal direction can be followed, and calculates the angular velocity of the robot. The Timing Control generates the linear velocity that allows the robot to accomplish the mission within the movement time frame specified by the user; sensory information indicates to this module whether the robot is on time or delayed. More details about these modules are given in [24]. The Temporal Constraint map, explained in the next section, verifies whether the robot is able to complete the mission in time. The Localization System estimates the position of the robot based on visual information and odometry.

Figure 2: View of the mesh of systems that includes a detailed view of the Motion Control.

The global system is the composition of the architecture and robot blocks, with the aforementioned assumptions on differentiability and environment neutrality. The composition is a contraction if it verifies condition (2), which leads to condition (3). Using the differentiation chain rule, the left-hand term in (3) can be written as (4) (assuming norm compatibility; see, e.g., [35]). Substituting (4) into (3) yields (5). For a generic robot, for which no information about its structure is available, it can be reasonably assumed that (5) holds whenever condition (6) does; (6) then ensures the contraction of the global system and hence the existence of a fixed point, independently of the model of the robot. This is a weak condition for mission success, but it nevertheless represents an important design tool.

The internal organization of a global system composed only of dynamical systems is not required to prove its contraction. According to [36], if the dynamical systems are somehow connected to each other, through a parallel, feedback, or hierarchical combination, the global system remains contracting if each individual dynamical system is contracting. In fact, Contraction theory can be used to derive sufficient conditions for a mesh of coupled nonlinear systems, since contraction is automatically preserved through the variety of system combinations [37].

These contraction properties extend to our global system, which is composed of a mesh of coupled nonlinear dynamical systems and feedthrough maps. Our global system is a contraction if (i) the dynamical systems and feedthrough maps are connected; (ii) the dynamical systems are contracting; (iii) the feedthrough maps are upper bounded; and (iv) the Jacobian norms of the dynamical systems, when composed with the feedthrough maps (which can have upper bounds greater than 1), result in a value smaller than 1. Otherwise, the contraction of the global system is not verified. Thus, by the combination property of contracting systems [37], the global system illustrated in Figure 2 can be represented simply as a sequence of modules whose composition must verify the contraction condition.
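The series-combination property above reduces, for Jacobian-norm upper bounds, to a product check. The sketch below is an illustration under that simplification (per-block scalar bounds rather than the actual matrix norms), not the paper's analysis:

```python
from math import prod

def mesh_is_contracting(jacobian_bounds):
    """Sufficient contraction condition for a series composition of
    blocks, each characterized by an upper bound on its Jacobian norm:
    the product of the bounds must stay below 1."""
    return prod(jacobian_bounds) < 1.0

# Two contracting dynamical systems (bounds 0.4 and 0.6) composed with
# a feedthrough map bounded by 2.0: the product 0.48 is below 1, so the
# mesh is still contracting even though one bound exceeds 1.
ok = mesh_is_contracting([0.4, 0.6, 2.0])
```

This matches point (iv) above: an individual feedthrough map may have a bound greater than 1 as long as the composed product remains below 1.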

2.3. Constraining the Mission Duration

For the purpose of this paper, the temporal constraint is an upper bound on the amount of time the robot has to complete the assigned mission; that is, the robot should reach the goal location within the movement time frame. Bounding this time requires that the global system converge to the fixed point fast enough.

A dynamical system with given initial and final conditions is said to verify a temporal constraint if, under the command inputs of the system, its state enters within the movement time frame a circular neighborhood of given radius centered at the goal. A feedthrough map that verifies whether the mission is completed within the temporal constraint is described next, where a suitably selected gain defines the rate of transition. The map returns approximately 1 while the mission is on time, and a value sufficiently large to violate the contraction condition when the mission is not finished within the time frame. Consequently, condition (6) no longer holds and the mission is considered failed, as the fixed point of the global system is no longer stable. The same representation as in (7) is used to include the timing constraint in the architecture. It must be noted that bounds such as (10) do not embed information on the interconnection among the blocks. In a sense, this corresponds to assuming that each block is meaningfully interconnected with the others, so that the system works towards completing the mission within the time frame. In fact, the Timing Control module in Figure 2 already embeds blocks that modify the velocity to compensate for delays. However, this increase may not be large enough (a typical example is saturation). In such cases it is necessary to use a performance timing index that observes the system in greater detail.
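The on-time/late behavior of such a feedthrough map can be sketched as a smooth switch. The logistic form, the gain `beta`, and the `penalty` value below are illustrative assumptions, not the paper's exact map; the key property is that the output stays near 1 while the mission is on time and grows past any contraction bound once the time frame is exceeded.

```python
import math

def _sigmoid(z):
    # numerically safe logistic switch (avoids overflow for large |z|)
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def timing_index(t, time_frame, beta=5.0, penalty=10.0):
    """Approximately 1 while t <= time_frame; grows towards `penalty`
    (large enough to break a Jacobian-norm product below 1) once the
    movement time frame is exceeded. `beta` sets the transition rate."""
    s = _sigmoid(beta * (t - time_frame))
    return 1.0 + (penalty - 1.0) * s
```

Composing this index with an otherwise contracting mesh leaves the product of bounds below 1 while the mission is on time, and pushes it above 1 (mission failed) afterwards.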

Including additional constraints in the architecture can easily be done using the above principle. This shows the usefulness of Contraction theory, and in particular of the combination of contracting systems, for the stability analysis of complex real-time control architectures.

3. Experimental Setup

Figure 3(a) shows the Pioneer 3-DX robot used in the experiments, with an onboard PSEye camera (pointing upwards) to detect the landmarks, a 2.4 GHz laptop to run the control architecture, and a Hokuyo laser range finder to avoid obstacles. Laser readings, odometry, and visual information are acquired every 100 ms, while the dynamical systems and feedthrough maps are computed approximately every 20 ms.

Figure 3: Robot used during the experiments and snapshot illustrating part of the ceiling of the environment where artificial landmarks are placed (captured by the camera oriented upwards).

The Localization System uses the internal encoders for odometry and visual information to recognize artificial landmarks placed on the ceiling of the environment (see Figure 3(b)). The fusion of both sources is realized through an Extended Kalman Filter (EKF). Landmarks that cannot be told apart would introduce ambiguities in pose estimation; therefore, the localization system uses the ARtoolkit framework [38] to recognize individual landmarks and selects the landmark with the highest degree of confidence to estimate the robot's pose. Figure 3(b) depicts a snapshot of part of the ceiling obtained by the robot's camera, in which 3 landmarks are detected and 1 is not.
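The fusion idea can be sketched in one dimension: odometry predicts, landmark observations correct. The real filter is an EKF on the planar pose; the function names, noise variances `q` and `r`, and the odometry bias below are illustrative assumptions.

```python
# Minimal 1-D sketch of the odometry + landmark fusion behind the EKF.

def predict(x, p, odom_delta, q=0.02):
    """Dead-reckoning step: odometry shifts the estimate and inflates
    the variance by the process noise q."""
    return x + odom_delta, p + q

def correct(x, p, z, r=0.05):
    """Landmark fix: blend prediction and measurement z via the Kalman
    gain; the variance shrinks after each correction."""
    k = p / (p + r)
    return x + k * (z - x), (1.0 - k) * p

# Biased odometry (reports 0.11 m per true 0.10 m step) is kept honest
# by landmark observations of the true position (noise-free here).
truth, x, p = 0.0, 0.0, 1.0
for _ in range(20):
    truth += 0.10
    x, p = predict(x, p, 0.11)
    x, p = correct(x, p, truth)
```

After a few steps the estimate tracks the true position closely despite the odometry bias, which is the behavior exploited in the landmark-failure and kidnapping tests of Section 4.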

The landmark placement results from an optimization process constrained to preserving the visibility, at all times, of at least 1 landmark. This approach is similar to the one adopted in [39] (see also [40, 41] for alternatives).

The minimum number of landmarks required to ensure that at least 1 landmark is visible to the robot depends on (i) the camera's field of view and (ii) the distance between the camera and the ceiling. These constraints limit the detection area projected by the camera onto the ceiling; one landmark should lie inside this area in order to be visible. The larger the detection area, the smaller the number of required landmarks. With the camera mounted on the robot's top (see Figure 3(a)), at a distance of approximately 2.8 m from the ceiling, the detection area is a square (see Figure 3(b)).
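The geometry behind this constraint can be sketched directly: an upward-facing camera at height h below the ceiling with field of view `fov` sees a region of side 2·h·tan(fov/2). The 40-degree field of view in the example is a hypothetical value, not the paper's camera specification.

```python
import math

def detection_side(height_m, fov_deg):
    """Side of the square region a camera sees on the ceiling, from
    its mounting distance and (assumed symmetric) field of view."""
    return 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)

# At the paper's ~2.8 m camera-to-ceiling distance, a hypothetical
# 40-degree field of view yields a side of roughly 2 m; landmarks
# spaced no farther apart than this keep at least one visible.
side = detection_side(2.8, 40.0)
```

This makes explicit the trade-off stated above: a wider field of view or a higher ceiling enlarges the detection area and reduces the number of landmarks needed.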

Monte Carlo tests assess the performance of the localization system by analyzing accuracy, precision, and convergence time. The accuracy of the localization system was obtained from a set of 20 experiments; in each of them the robot was stopped at random locations and the estimates were compared to the ground truth. Table 1 summarizes the results of these experiments, reporting the errors on the x and y coordinates and on the orientation of the robot. The localization system presents good accuracy, but the precision is poor. In very cluttered environments with small free spaces, poor precision can increase the percentage of mission failures. Nonetheless, these values are similar to those of current UWB standalone indoor localization systems [42, 43] and of systems combining odometry, laser range measurements, and a priori known maps [44].

Table 1: Localization system accuracy.

The convergence rate of the robot's localization was also obtained from the set of 20 field experiments. The EKF is initialized at a random location in each run. As the coordinates of the random locations are known, the robot's pose is compared to the ground truth after the convergence of the localization system. Figure 4 shows the error in the robot's pose over time. It takes approximately 5 s for the estimate of the x, y coordinates to converge to approximately 0.16 m; the orientation converges to 0.05 rad in approximately 1 s.

Figure 4: Precision versus time: convergence of the localization system. Blue line depicts the average error of the 20 experiments.

4. System Dependability

Dependability is mostly used in software development and usage contexts and can be identified with a measure of the robustness of the system to unexpected conditions or failures [45]. Some literature has already addressed dependability in human-robot interaction contexts on a physical basis (see, e.g., [26, 45]). Even though the contraction analysis suggests that the whole system is dependable, field trials allow a realistic assessment.

In robotics, dependability involves physical safety and operational robustness including reliability and integrity. Some of these factors can overlap as their respective measures carry a certain degree of subjectivity. The dependability approach in this paper is oriented towards high-level aspects of a full architecture.

Integrity assesses the ability of the localization system to recover from unexpected situations, for example, (i) collisions, (ii) landmark failures, and (iii) kidnapped robot scenarios. Reliability is the probability of successfully completing a mission, that is, the ratio between the number of successful missions and the total number of missions over a wide variety of conditions. Safety counts the collisions with obstacles that the robot can cause over time. Completing missions in a short amount of time or in a highly populated environment increases the complexity of the mission and must be considered when measuring the reliability and safety attributes. Results concerning reliability and safety are shown in Section 5. Monte Carlo tests, consisting of several repeated runs under different conditions, were considered for each dependability feature.
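The reliability and safety bookkeeping described above amounts to simple ratios over mission logs. The sketch below illustrates this; the log format and field names are assumptions for illustration, and the sample data is invented, not the paper's results.

```python
def reliability(missions):
    """Fraction of successfully completed missions (successes / total)."""
    ok = sum(1 for m in missions if m["success"])
    return ok / len(missions)

def collision_rate(missions):
    """Collisions per hour of operation, as a safety indicator."""
    hours = sum(m["duration_s"] for m in missions) / 3600.0
    return sum(m["collisions"] for m in missions) / hours

# Hypothetical mission log (not the paper's data):
log = [
    {"success": True,  "duration_s": 500.0, "collisions": 0},
    {"success": True,  "duration_s": 450.0, "collisions": 1},
    {"success": False, "duration_s": 700.0, "collisions": 0},
    {"success": True,  "duration_s": 520.0, "collisions": 0},
]
r = reliability(log)   # 3 of 4 missions succeeded -> 0.75
```

Normalizing collisions by operating time rather than by mission count keeps the safety figure comparable across missions of very different durations, such as the long-run experiments of Section 5.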

4.1. Integrity

In order to evaluate the ability of the robot to recover from unexpected situations, several experiments were conducted exposing the robot to the following failures: (i) collisions deliberately caused with the robot, (ii) variations in the percentage of detected landmarks, and (iii) the kidnapping problem [46, 47].

4.1.1. Collisions

Collisions with obstacles may occur when the robot moves in highly populated environments, as people may inadvertently bump into the robot. During a mission lasting 500 s and covering 100 m, some people intentionally collided with the robot, bumping its front and forcing it to rotate and find a new trajectory. Results for the robot's velocity and the estimates of the robot's pose are illustrated in Figures 5 and 6, respectively.

Figure 5: (a) Minimum distance measured by the laser mounted on the robot (blue line); red dashed line identifies 0.25 m and green circles represent situations in which collisions occurred. (b) Reference velocity generated by the Timing Control (blue line) and vertical dashed green lines indicating the collisions.
Figure 6: Estimation of the robot’s pose (blue line), data of the robot’s pose acquired by the camera (black circles), and instants of time in which collisions occurred (green dashed lines).

The continuous blue line in Figure 5(a) shows the minimum distance between the robot and obstacles measured by the laser. Whenever this distance drops below 0.25 m (dashed red line), a collision between the robot and an obstacle occurred. Thirteen intentional collisions occurred during the mission (green circles). Figure 5(b) illustrates the velocity of the robot (continuous blue line); vertical dashed green lines depict the moments when collisions occurred. Despite the collisions, the Timing Control module manages to generate a suitable velocity reference and the robot completes the mission within the specified time frame.

Figure 6 illustrates the robot's pose (blue line) when collisions with obstacles occur (green dashed lines). The pose data provided by the camera mounted on the robot (black circles) continues to be correctly acquired after the collisions, and the degradation of the localization accuracy is not relevant. Although the robot had to rotate and find a new trajectory after each collision, the Localization System kept the robot within a localization uncertainty suitable for navigation.

4.1.2. Landmark Detection Failures

During missions it is reasonable to expect that some landmarks will not be detected, for example, due to variations in lighting conditions. The effect of nondetected landmarks on the localization uncertainty was verified by modifying the set of available landmarks: in each experiment, a different percentage of landmarks was deliberately removed from the full set. In each experiment, the robot performed a sequence of single missions covering approximately 300 m. In this sequence, when the robot reaches a goal location, it immediately assumes a new goal and moves towards it.

Figure 7 shows the obtained results. The percentage of available landmarks has an impact on the localization uncertainty; that is, the higher the percentage of available landmarks, the lower the uncertainty level. In normal conditions, in which the full set of landmarks is available for detection, the results are similar to the ones presented in Table 1.

Figure 7: Uncertainty of the localization system when the percentage of detected landmarks changes. Green circles represent the average error, red lines represent the median, black dotted lines represent the minimum and maximum values, and blue rectangles represent the 25th–75th percentile range.

The robot successfully completed the sequence of missions even when the number of available landmarks was reduced, albeit with larger uncertainty values. When no landmarks were available, the robot was unable to reset its position based on visual information and therefore unable to complete the sequence of missions; in fact, the robot got lost after moving a few meters.

4.1.3. Robot Kidnapping

The kidnapping tests assess the ability of the robot to recover from catastrophic localization failures, verifying whether the Localization System can handle such failures by recovering the estimates of the robot's pose.

Initially, the robot is at a known position and remains stopped for 10 s to ensure the convergence of the localization. The robot is then moved to another position and has no information about its new position during a period of time sufficient for it to get lost when no landmarks are detected. This test was repeated for 10 runs; the average error between the robot's true position and its estimated position after the kidnapping is 4 m. Figure 8 shows the average uncertainty in the x, y coordinates and orientation obtained from the 10 runs. The green area represents the average interval of time during which the robot is being kidnapped. When visual information becomes available again, the x, y coordinates of the robot's position converge to the expected uncertainty (0.16 m according to Table 1) in approximately 5 s (see Figure 8(b), which zooms in on the convergence interval). The robot's heading direction converges to approximately 0.05 rad. These convergence values are consistent with those in Table 1 and suggest that the Localization System is able to handle catastrophic localization problems.

Figure 8: Uncertainty versus time: convergence of the localization system when the robot is kidnapped during an interval of time (green area).

5. Field Experiments

This section presents a set of experiments illustrating that stability condition (6) holds while the robot performs a set of long-term missions. If this condition is verified during a mission, then the robot reached the goal location within the timing constraint. In cases where stability condition (6) is not verified, the global system is not stable because the robot did not reach the goal location or reached it too late. If the robot verifies that it cannot reach the goal location in time, it adopts a constant velocity (0.1 m/s) and keeps moving towards the goal location. Such situations occur, for instance, in a very populated environment, in which obstacles delay the robot. The maximum velocity of the robot is limited by its physical constraints to 0.8 m/s.

At the beginning of each mission, the map of the environment, the goal position, and the movement time frame are provided to the robot.

The reliability of the architecture is calculated as the ratio between the number of successful missions and the total number of missions. A mission is successful if the robot reaches a neighborhood of the goal location within the assigned time frame. According to the values in Table 1, the chosen neighborhood radius is a suitable distance that allows the robot to stop safely and close to the goal despite the localization uncertainty.

In order to demonstrate that the architecture successfully produces smooth and uninterrupted trajectories over long distances without human supervision, several experiments were performed. The first two experiments are composed of single missions that demonstrate the behavior of the robot when reaching the goal location while avoiding obstacles. The last two experiments are composed of multiple missions, consisting of repeated sequential missions that form a long-term mission. Failures in landmark detection and kidnappings can occur during these multiple missions. In all experiments, the corridors and the lab were crowded with people performing their working routines.

The free space available to the robot is shown in Figure 9(b) (complete map) and Figure 9(a) (zoomed partial map of the regions). Black lines depict parts of the environment that are unavailable to the robot, while red dashed lines can be viewed as imaginary lines useful for mobile navigation (see [24] for more details). A total of 32 landmarks for localization were distributed along the ceiling.

Figure 9: Map of the indoor environment for the field experiments. Labels identify the different regions. The red circle identifies the starting position of the robot and the green cross identifies the goal location where the missions end. The blue line represents the path followed by the robot. Empty green rectangles represent dynamic obstacles and filled red rectangles represent static obstacles. Striped areas depict forbidden areas unreachable by the robot.
5.1. Experiment 1

In this experiment (see Figure 10) the robot must cope with dynamic obstacles such as people (snapshots A and B), unexpected static obstacles (snapshots A and C), and narrow passages. The single mission starts in one region and the goal location lies in another (see Figure 9). The assigned time frame is sufficient for the robot to complete the mission if it moves with an average velocity of 0.3 m/s. The mission evolves along several regions, and the people there are already familiar with the behavior of the robot.

Figure 10: Snapshots of the robot performing the mission of experiment 1.

Figure 11 shows the velocity of the robot (red continuous line). Green circles depict instants of time in which obstacles force the robot to decrease its velocity in order to ensure a safe circumnavigation (see [24] for the velocity reduction mechanism). Panels A and C of Figure 10 depict these instants of time. After the obstacle circumnavigation, the velocity is increased to compensate for the delay caused by the reduction in velocity.
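The compensation behavior described above can be sketched as a simple rule: move at the velocity that covers the remaining distance exactly within the remaining time, clamped to the robot's limits. This is a minimal illustration, not the paper's actual Timing Control law (which is detailed in [24]); the function name is hypothetical, while the 0.8 m/s maximum and 0.1 m/s minimum velocities are taken from the values reported later in the paper.

```python
def timing_velocity(remaining_distance, remaining_time, v_max=0.8, v_min=0.1):
    """Velocity that would cover the remaining distance exactly within
    the remaining movement time, clamped to the robot's velocity limits."""
    if remaining_time <= 0:
        return v_max  # behind schedule: move as fast as allowed
    return max(v_min, min(remaining_distance / remaining_time, v_max))

print(timing_velocity(9.0, 30.0))  # 0.3  (nominal pace)
print(timing_velocity(9.0, 20.0))  # 0.45 (speeding up after a slowdown)
```

After a slowdown caused by an obstacle, the remaining time shrinks faster than the remaining distance, so the commanded velocity rises above the nominal pace, which is the compensation effect visible in Figure 11.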

Figure 11: Linear velocity of the robot (red continuous line) and velocity generated by the Timing Control. Capital letters identify the snapshots illustrated in Figure 10.

Figure 12 shows the evolution of the index used in condition (6). Its upper bound stays well below 1, in agreement with (6). This suggests that the composition of the modules of the architecture is a contraction that converges to the unique fixed point within the assigned time frame. This identifies mission success; that is, the robot reached the goal location in time despite the obstacles in the path.
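The success check performed on the plotted index can be expressed compactly: the index must stay strictly below 1 for the whole mission. A minimal sketch, with a hypothetical function name and made-up sample values:

```python
def contraction_condition_holds(index_values, bound=1.0):
    """Condition (6) in sketch form: the index must stay strictly
    below 1 throughout the mission for the composition of the
    architecture modules to be a contraction."""
    return max(index_values) < bound

print(contraction_condition_holds([0.4, 0.7, 0.85, 0.6]))  # True
print(contraction_condition_holds([0.4, 1.05, 0.6]))       # False
```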

Figure 12: Evolution of the index along the single mission of experiment 1. Capital letters identify snapshots illustrated in Figure 10.
5.2. Experiment 2

This experiment aims at verifying that the stability conditions hold during a single mission in which the robot has to move from the laboratory to a goal location in the corridor (see Figure 9). During this mission (see snapshots in Figure 13), the robot has to pass through the door that divides the laboratory from the corridor (panel B) while facing the same kinds of obstacles encountered in experiment 1 (panels A, C, and D).

Figure 13: Snapshots of the robot performing the mission of experiment 2.

Figure 14 shows that condition (6) holds during the mission, which means that the architecture remains stable and the robot reaches the goal location within the specified movement time. Despite circumnavigating the persons shown in panels C and D of Figure 13, the robot successfully completed the mission.

Figure 14: Evolution of the index along the single mission of experiment 2. Capital letters identify snapshots illustrated in Figure 13.
5.3. Experiment 3

In this experiment, the robot performs a long-run continuous mission consisting of reaching a sequence of 7 different goal points 260 times. When the robot reaches a goal location, it immediately assumes a new goal and moves towards it; when the final goal is reached, the next goal is set to the first one, repeating the cycle. Reaching each goal point is considered a single mission, and the goals are chosen so that the robot covers a large part of the free space in the populated environment.
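The cyclic goal sequencing described above maps naturally onto an endless iterator over the waypoint list. The coordinates below are hypothetical placeholders; the paper does not give the actual waypoint positions.

```python
from itertools import cycle

# Hypothetical goal coordinates; the paper's actual waypoints are not given.
goal_points = [(2.0, 1.0), (6.0, 1.0), (6.0, 4.0), (2.0, 4.0),
               (4.0, 6.0), (8.0, 6.0), (8.0, 2.0)]
goal_stream = cycle(goal_points)

current_goal = next(goal_stream)   # goal of the first mission
# ...the robot reaches current_goal, then immediately switches...
current_goal = next(goal_stream)
print(current_goal)  # (6.0, 1.0)
```

`cycle` wraps around after the last waypoint, so the same object drives the whole long-run mission without any reset logic.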

The total time spent by the robot to complete the long mission was approximately 2.5 hours, covering a total distance of approximately 1640 m. Figure 15 depicts the trajectory executed by the robot during the missions in this experiment. Note that the robot reached the neighborhood of the goal points (identified by blue circles).

Figure 15: Trajectory followed by the robot (green lines) during experiment 3. Black lines represent forbidden space. Red dashed lines represent the boundaries of the regions (see [24] for more details).

No collisions with obstacles (static or people) occurred during the experiment. While some persons tried to obstruct the robot, others treated it as an obstacle and adjusted their own trajectories to avoid colliding with it. The robot failed to reach the goal location within the timing constraint in 8 missions, which yields a high reliability percentage. These failures occurred because the robot faced obstacles that prevented it from reaching the goal location within the assigned time frame, which is sufficient for stability condition (6) not to hold. Even though the timing constraints were not met in these 8 missions, the robot still avoided the obstacles and eventually reached the goal location.

Figure 16 shows the evolution of the index. Condition (6) holds when a mission is completed within the assigned time frame and fails otherwise (see the red data in Figure 16 for failed missions). As expected, the performance timing index (9) is sufficient to identify every failed mission, that is, every mission in which the robot does not fulfill the timing constraint. The high density of obstacles in the environment can create complex situations that the robot is unable to resolve in time to reach the goal location. Other situations that lead to mission failures are landmark detection failures and kidnapping. If landmarks are not detected, the Localization System is unable to reset the odometry, and the uncertainty in the robot's localization grows unbounded.

Figure 16: Evolution of the index along the sequence of missions. Values larger than 1 mean that the mission was not successfully completed. The plot is clipped at approximately 1.1; for failed missions the index is much higher.
5.4. Experiment 4

At the beginning of this experiment, a set of 242 locations over the map was randomly generated such that no two successive locations belong to the same region of the environment. The robot had to navigate autonomously to these locations in sequence. The time frame for each mission was generated such that the average velocity required to complete it was 0.3 m/s, which respects the robot's physical constraint on maximum velocity (0.8 m/s).
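The goal-generation constraint and the time-frame rule can be sketched as follows. The function names, region names, and coordinates are hypothetical; only the 0.3 m/s required average velocity comes from the paper.

```python
import random

def generate_goals(regions, n, seed=0):
    """Draw n goal locations so that no two successive goals lie in the
    same region; `regions` maps a region name to candidate points."""
    rng = random.Random(seed)
    goals, prev = [], None
    for _ in range(n):
        name = rng.choice([r for r in regions if r != prev])
        goals.append(rng.choice(regions[name]))
        prev = name
    return goals

def movement_time(distance, required_avg_velocity=0.3):
    """Time frame that makes the mission feasible at 0.3 m/s,
    well below the robot's 0.8 m/s maximum velocity."""
    return distance / required_avg_velocity

regions = {"lab": [(1.0, 1.0), (2.0, 3.0)],
           "corridor": [(10.0, 1.0), (14.0, 1.0)]}
print(len(generate_goals(regions, 4)))       # 4
print(round(movement_time(6.6), 1))          # 22.0 (seconds)
```

Because successive goals must come from different regions, every mission forces the robot to cross region boundaries rather than loiter in one area.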

Figure 17 depicts the goal locations of the missions in this experiment (green circles for successful missions and red crosses for failed ones). During the missions, the number and type of obstacles varied from only static obstacles to multiple dynamic obstacles (people). The percentage of failed missions is higher inside the laboratory than in the corridor. The laboratory has more narrow spaces, which favors unlucky configurations of obstacles that prevent the robot from reaching the goal within the assigned time frame.

Figure 17: Goal locations of the missions performed by the robot during experiment 4. Green circles represent successful missions, whereas red crosses represent unsuccessful ones. In total, there are 230 successful missions and 12 unsuccessful missions.

The robot performed 242 missions and covered a distance of approximately 1261 m in 2 hours. The results of this experiment are summarized in Table 2. The average time frame per mission is approximately 22 s, and the average linear velocity is 0.27 m/s. Note that this velocity is lower than the required average velocity used to compute the time frames (0.30 m/s). This is a consequence of the failed missions, in which the robot moves at a low velocity (0.1 m/s) once it verifies that the goal location is unreachable in the remaining time. The percentage of successful missions in this experiment was slightly lower than in experiment 3, meaning that the robot had more difficulty completing the missions within the temporal constraint; some of the random goal locations were difficult to reach (e.g., corners and locations close to obstacles). The 12 failed missions were identified by condition (6) and are illustrated in Figure 18. Clearly, when the robot does not reach the goal location in time, the performance timing index (9) causes condition (6) to fail.

Table 2: Results of the sequential missions performed in experiment 4.
Figure 18: Evolution of the index along the sequence of missions. Values larger than 1 mean that the mission was not successfully completed, due to the performance timing index (9).

6. Conclusions

The paper describes field experiments that cover a wide range of realistic scenarios in a typical university indoor environment where students and staff move frequently around. These include single missions as well as long term missions that last for hours.

The stability condition based on Contraction theory, which includes the performance timing index, identifies the ability of the architecture to successfully lead the robot towards the goal while respecting the timing constraints. The timing constraint in the architecture can be replaced by, or combined with, other performance constraints, and stability can still be analyzed with Contraction theory, provided that the constraint is connected to some module of the architecture. Moreover, this property extends to any other control module or feedthrough map.

Monte Carlo tests demonstrate the ability of the localization system to provide sufficiently accurate estimates. The results suggest that the architecture is robust to the uncertainties introduced by some of the blocks, which may be relevant in experiments with similar characteristics. Dependability was evaluated in terms of reliability, integrity, and safety. The robot successfully recovers from problematic situations, such as kidnapping, unexpected collisions with obstacles, and landmark detection failures. The uncertainty of the localization system, as well as the number of necessary landmarks, could be reduced by using a more capable visual sensor. Nevertheless, the localization system with the current visual sensing is sufficiently accurate for the robot to move in the range of kilometers.

Future work includes (i) improving the localization system to use fewer landmarks, (ii) combining maps to improve the accuracy of the localization system, and (iii) extending the stability results to the case in which some of the blocks of the architecture have nonsmooth dynamics. New field experiments also have to be carried out in other complex environments, including a real hospital environment, which is the main motivation for this work.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This work has been supported by Fundação para a Ciência e Tecnologia (FCT) within Projects UID/EEA/50009/2013 and PEst-OE/EEI/UI0319/2014 and Grant SFRH/BD/68805/2010.


References

1. J. B. Silva, V. Matos, and C. Santos, “Timed trajectory generation for a toy-like wheeled robot,” in Proceedings of the 36th Annual Conference of the IEEE Industrial Electronics Society (IECON ’10), pp. 1645–1650, IEEE, Glendale, Calif, USA, November 2010.
2. A. J. Ijspeert, J. Nakanishi, and S. Schaal, “Learning attractor landscapes for learning motor primitives,” in Advances in Neural Information Processing Systems 15 (NIPS ’02), pp. 1547–1554, MIT Press, 2002.
3. M. Buehler and D. E. Koditschek, “Planning and control of a juggling robot,” International Journal of Robotics Research, vol. 13, no. 2, pp. 101–118, 1994.
4. G. Schöner and C. Santos, “Control of movement time and sequential action through attractor dynamics: a simulation study demonstrating object interception and coordination,” in Proceedings of the 9th International Symposium on Intelligent Robotic Systems (SIRS ’01), Toulouse, France, July 2001.
5. M. Tuma, I. Iossifidis, and G. Schöner, “Temporal stabilization of discrete movement in variable environments: an attractor dynamics approach,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’09), pp. 863–868, Kobe, Japan, May 2009.
6. W. Lohmiller and J.-J. E. Slotine, “On contraction analysis for non-linear systems,” Automatica, vol. 34, no. 6, pp. 683–696, 1998.
7. J. Jouffroy and J.-J. E. Slotine, “Methodological remarks on contraction theory,” in Proceedings of the 43rd IEEE Conference on Decision and Control (CDC ’04), vol. 3, pp. 2537–2543, IEEE, Nassau, Bahamas, December 2004.
8. W. Wang and J.-J. E. Slotine, “On partial contraction analysis for coupled nonlinear oscillators,” Biological Cybernetics, vol. 92, no. 1, pp. 38–53, 2005.
9. D. Angeli, “A Lyapunov approach to incremental stability properties,” IEEE Transactions on Automatic Control, vol. 47, no. 3, pp. 410–421, 2002.
10. V. Fromion, G. Scorletti, and G. Ferreres, “Nonlinear performance of a PI controlled missile: an explanation,” International Journal of Robust and Nonlinear Control, vol. 9, no. 8, pp. 485–518, 1999.
11. J. Jouffroy and T. I. Fossen, “A tutorial on incremental stability analysis using contraction theory,” Modeling, Identification and Control, vol. 31, no. 3, pp. 93–106, 2010.
12. M. S. Branicky, “Multiple Lyapunov functions and other analysis tools for switched and hybrid systems,” IEEE Transactions on Automatic Control, vol. 43, no. 4, pp. 475–482, 1998.
13. J.-J. E. Slotine and W. Li, Applied Nonlinear Control, Prentice Hall, Upper Saddle River, NJ, USA, 1991.
14. J. Jouffroy, “A simple extension of contraction theory to study incremental stability properties,” in Proceedings of the European Control Conference (ECC ’03), pp. 1315–1321, Cambridge, UK, September 2003.
15. E. D. Sontag, “Smooth stabilization implies coprime factorization,” IEEE Transactions on Automatic Control, vol. 34, no. 4, pp. 435–443, 1989.
16. J. M. Toibero, F. Roberti, R. Carelli, and P. Fiorini, “Switching control approach for stable navigation of mobile robots in unknown environments,” Robotics and Computer-Integrated Manufacturing, vol. 27, no. 3, pp. 558–568, 2011.
17. A. Benzerrouk, L. Adouane, and P. Martinet, “Lyapunov global stability for a reactive mobile robot navigation in presence of obstacles,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’10), Anchorage, Alaska, USA, May 2010.
18. E. Freire, T. Bastos-Filho, M. Sarcinelli-Filho, and R. Carelli, “A control architecture for mobile robots using fusion of the output of distinct controllers,” in Proceedings of the IEEE International Symposium on Intelligent Control, pp. 142–147, Vancouver, Canada, 2002.
19. S. Thrun, “Learning metric-topological maps for indoor mobile robot navigation,” Artificial Intelligence, vol. 99, no. 1, pp. 21–71, 1998.
20. J. Minguez, “Integration of planning and reactive obstacle avoidance in autonomous sensor-based navigation,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’05), pp. 2486–2492, IEEE, Alberta, Canada, August 2005.
21. Z. Zivkovic, B. Bakker, and B. Kröse, “Hierarchical map building and planning based on graph partitioning,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’06), pp. 803–809, IEEE, Orlando, Fla, USA, May 2006.
22. K. Konolige, E. Marder-Eppstein, and B. Marthi, “Navigation in hybrid metric-topological maps,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’11), pp. 3041–3047, Shanghai, China, May 2011.
23. J. Sequeira, C. Santos, and J. Silva, “Dynamical systems in robot control architectures: a building block perspective,” in Proceedings of the 12th International Conference on Control Automation Robotics & Vision (ICARCV ’12), pp. 82–87, IEEE, Guangzhou, China, December 2012.
24. J. Silva, J. Sequeira, and C. Santos, “A stability analysis for a dynamical robot control architecture,” Intelligent Autonomous Vehicles, vol. 8, no. 1, pp. 225–230, 2013.
25. O. Michel, “Webots: professional mobile robot simulation,” International Journal of Advanced Robotic Systems, vol. 1, no. 1, pp. 39–42, 2004.
26. A. Bicchi, M. Bavaro, G. Boccadamo et al., “Physical human-robot interaction: dependability, safety, and performance,” in Proceedings of the 10th IEEE International Workshop on Advanced Motion Control (AMC ’08), pp. 9–14, IEEE, Trento, Italy, March 2008.
27. B. Graf, “Dependability of mobile robots in direct interaction with humans,” in Advances in Human-Robot Interaction, vol. 14 of Springer Tracts in Advanced Robotics, pp. 223–239, Springer, Berlin, Germany, 2005.
28. S. R. Pring and C. J. Budd, “The dynamics of regularized discontinuous maps with applications to impacting systems,” SIAM Journal on Applied Dynamical Systems, vol. 9, no. 1, pp. 188–219, 2010.
29. J. Awrejcewicz, M. Fečkan, and P. Olejnik, “On continuous approximation of discontinuous systems,” Nonlinear Analysis, vol. 62, no. 7, pp. 1317–1331, 2005.
30. M.-F. Danca and S. Codreanu, “On a possible approximation of discontinuous dynamical systems,” Chaos, Solitons and Fractals, vol. 13, no. 4, pp. 681–691, 2002.
31. C. Santos, “Generating timed trajectories for an autonomous vehicle: a non-linear dynamical systems approach,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’04), vol. 4, pp. 3741–3746, New Orleans, La, USA, April 2004.
32. S. Dégallier, C. P. Santos, L. Righetti, and A. Ijspeert, “Movement generation using dynamical systems: a humanoid robot performing a drumming task,” in Proceedings of the 6th IEEE-RAS International Conference on Humanoid Robots, pp. 512–517, Genova, Italy, December 2006.
33. G. Schöner and M. Dose, “A dynamical systems approach to task-level system integration used to plan and control autonomous vehicle motion,” Robotics and Autonomous Systems, vol. 10, no. 4, pp. 253–267, 1992.
34. B. Hasselblatt and A. Katok, A First Course in Dynamics, Cambridge University Press, Cambridge, UK, 2003.
35. H. Lütkepohl, Handbook of Matrices, John Wiley & Sons, 1996.
36. W. Lohmiller and J.-J. E. Slotine, “Control system design for mechanical systems using contraction theory,” IEEE Transactions on Automatic Control, vol. 45, no. 5, pp. 984–989, 2000.
37. J.-J. E. Slotine and W. Lohmiller, “Modularity, evolution, and the binding problem: a view from stability theory,” Neural Networks, vol. 14, no. 2, pp. 137–145, 2001.
38. H. Kato and M. Billinghurst, “Marker tracking and HMD calibration for a video-based augmented reality conferencing system,” in Proceedings of the 2nd IEEE and ACM International Workshop on Augmented Reality (IWAR ’99), pp. 85–94, San Francisco, Calif, USA, 1999.
39. P. Sala, R. Sim, and A. Shokoufandeh, “Placing artificial visual landmarks in a mobile robot workspace,” IEEE Transactions on Robotics, vol. 22, no. 2, pp. 334–349, 2006.
40. D. Meyer-Delius, M. Beinhofer, A. Kleiner, and W. Burgard, “Using artificial landmarks to reduce the ambiguity in the environment of a mobile robot,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’11), pp. 5173–5178, Shanghai, China, May 2011.
41. M. Beinhofer, J. Müller, and W. Burgard, “Effective landmark placement for accurate and reliable mobile robot navigation,” Robotics and Autonomous Systems, vol. 61, no. 10, pp. 1060–1069, 2013.
42. S. Krishnan, P. Sharma, Z. Guoping, and O. H. Woon, “A UWB based localization system for indoor robot navigation,” in Proceedings of the IEEE International Conference on Ultra-Wideband (ICUWB ’07), pp. 77–82, IEEE, Singapore, September 2007.
43. J. González, J. L. Blanco, C. Galindo et al., “Mobile robot localization based on ultra-wide-band ranging: a particle filter approach,” Robotics and Autonomous Systems, vol. 57, no. 5, pp. 496–507, 2009.
44. A. Llarena, J. Savage, A. Kuri, and B. Escalante-Ramírez, “Odometry-based Viterbi localization with artificial neural networks and laser range finders for mobile robots,” Journal of Intelligent & Robotic Systems, vol. 66, no. 1-2, pp. 75–109, 2012.
45. R. Filippini, S. Sen, and A. Bicchi, “Toward soft robots you can depend on,” IEEE Robotics & Automation Magazine, vol. 15, no. 3, pp. 31–41, 2008.
46. S. P. Engelson and D. V. McDermott, “Error correction in mobile robot map learning,” in Proceedings of the IEEE International Conference on Robotics and Automation, vol. 3, pp. 2555–2560, IEEE, Nice, France, May 1992.
47. S. Thrun, D. Fox, W. Burgard, and F. Dellaert, “Robust Monte Carlo localization for mobile robots,” Artificial Intelligence, vol. 128, no. 1-2, pp. 99–141, 2001.