Journal of Robotics

Volume 2016, Article ID 2560573, 18 pages

http://dx.doi.org/10.1155/2016/2560573

## Decentralized Cooperative Localization Approach for Autonomous Multirobot Systems

IS Lab, Faculty of Engineering and Applied Science, Memorial University of Newfoundland, St. John’s, NL, Canada A1B 3X5

Received 4 November 2015; Revised 5 February 2016; Accepted 11 February 2016

Academic Editor: Giovanni Muscato

Copyright © 2016 Thumeera R. Wanasinghe et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This study proposes the use of a split covariance intersection algorithm (Split-CI) for decentralized multirobot cooperative localization. In the proposed method, each robot maintains a local cubature Kalman filter to estimate its own pose in a predefined coordinate frame. When a robot receives pose information from neighbouring robots, it employs a Split-CI-based approach to fuse this received measurement with its local belief. The computational and communicative complexities of the proposed algorithm increase linearly with the number of robots in the multirobot system (MRS). The proposed method does not require fully connected synchronous communication channels between robots; in fact, it is applicable to MRS with asynchronous and partially connected communication networks. The pose estimation error of the proposed method is bounded. As the proposed method is capable of handling the independent and interdependent parts of the estimates separately, it does not generate overconfident state estimates. The performance of the proposed method is compared with several multirobot localization approaches. The simulation and experimental results demonstrate that the proposed algorithm outperforms single-robot localization algorithms and achieves approximately the same estimation accuracy as the centralized cooperative localization approach, but with reduced computational and communicative cost.

#### 1. Introduction

With the advancement of information and communication technology, robots from different vendors, from different domains of operation, and with various perception and operational capabilities are unified into a single framework in order to perform a collaborative task. These heterogeneous multirobot systems (MRS) offer greater coverage in exploration and searching tasks, robustness against individual failures, improved productivity through parallel tasking, and implementation of more complex operations than a single robot [1]. Integration of robots from different domains of operation, such as a robot team with unmanned ground vehicles (UGV) and unmanned aerial vehicles (UAV), enhances team perception while increasing accessibility in a cluttered environment [2].

Robots in a heterogeneous MRS may host different exteroceptive and proprioceptive sensory systems resulting in a significant variation in self-localization across teammates. Interrobot observations and flow of information between teammates can establish a sensor sharing technique such that the localization accuracy of each member improves over the localization approaches that solely depend on the robot’s onboard sensors. This sensor sharing technique is known as cooperative localization and was initially developed to improve the localization accuracy of MRS that are navigating in global positioning system- (GPS-) denied areas or areas without preinstalled (known) landmarks [3, 4]. However, it is reported that a minimum of one agent in a cooperative localization team should possess absolute positioning capabilities in order to have a bounded estimation error and uncertainty [5].

Available cooperative localization algorithms can be classified into three groups: centralized/multicentralized approaches, distributed approaches, and decentralized approaches. Although the centralized/multicentralized approaches generate consistent pose estimates with high accuracy, they entail considerably higher computational and communicative costs. Distributed cooperative localization algorithms attempt to reduce the communication cost; however, the computational cost of most distributed algorithms remains as high as that of the centralized/multicentralized approaches. Decentralized schemes reduce both the computational and communication costs of multirobot cooperative localization tasks. However, available decentralized cooperative localization (DCL) schemes sometimes generate overconfident pose estimates for robots in an MRS.

Therefore, computing a nonoverconfident state estimate with bounded error while reducing computation and communication costs is a key requirement for successful implementation of multirobot collaborative missions that rely on cooperative localization. In this paper, we extend our previous work presented in [6] and propose a scalable DCL approach for heterogeneous MRS which guarantees nonoverconfident pose estimates with bounded estimation error. The proposed method applies the split covariance intersection (Split-CI) algorithm to accurately track independencies and interdependencies between teammates' local pose estimates. The recently developed cubature Kalman filter (CKF) is exploited for sensor fusion. Each robot periodically measures the relative pose of its neighbours. These measurements and associated measurement uncertainties are first transformed into a common reference frame and then communicated to the corresponding neighbour robots. Once a robot receives pose measurements from a neighbour, it uses the Split-CI algorithm to fuse the received measurements with its local belief. The work presented in this paper offers the following key contributions: (i) a novel DCL approach that combines the properties of the Split-CI algorithm and the CKF in order to avoid double counting of information while generating pose estimates with bounded estimation error, (ii) an extension to the general CKF to accurately calculate and maintain both the independent and the dependent covariances, and (iii) a consistent and debiased algorithm to convert information between two Cartesian coordinate systems.
The proposed algorithm has the following properties: (i) the per-measurement computational and communication cost of the proposed DCL approach is constant, (ii) there is no requirement for a synchronous or fully connected communication network, (iii) the algorithm is robust against a single point of failure, and (iv) the algorithm is capable of generating a nonoverconfident pose estimate with bounded estimation error for each member of the MRS.

#### 2. Background

In 1994, Kurazume et al. proposed the first cooperative localization algorithm [3]. Since then, numerous approaches for cooperative localization have been reported. These implementations can be categorized into three main groups: (1) centralized/multicentralized cooperative localization, (2) distributed cooperative localization, and (3) decentralized cooperative localization.

Centralized cooperative localization approaches augment each robot's pose into a single state vector and perform state estimation tasks at a central processing unit [3, 4, 8–12]. The computational cost of the centralized algorithms grows polynomially with the number of robots, $N$, in the MRS. Since these implementations require each robot to communicate its high-frequency egocentric data and interrobot observations to a central processing unit, the communication links should have wide bandwidth. Multicentralized approaches have been reported to improve the robustness of the general centralized approaches against a single point of failure by duplicating the state estimation process on a few or all robots in the team [12, 13]; this increases the per-measurement communicative cost linearly with the number of independent processing units. A multicentralized approach that enables cooperative localization for sparsely communicating robot networks requires more communication bandwidth and onboard memory than traditional multicentralized approaches [14].

In distributed cooperative localization, each robot locally runs a filter to fuse egocentric measurements and propagate the state over time. As a result, there is no requirement for exchanging high-frequency egocentric measurements with teammates or with a central server. This enables the implementation of distributed cooperative localization algorithms using communication channels with limited bandwidth. However, interrobot observations are still fused at a central processor, leading to the same computational complexity as the centralized/multicentralized approaches. Work presented in [15] proposed a distributed maximum a posteriori estimator for cooperative localization that achieves reduced computational complexity and improved robustness against a single point of failure.

Decentralized algorithms focus on reducing both computational and communicative complexity. In general, each robot in a DCL team locally runs a filter (estimator) to fuse its egocentric measurements, as well as the interrobot observations received from its neighbours, with its local belief. Available decentralized algorithms perform the sensor fusion either in a suboptimal manner [6, 16–21] or in an approximate manner [22–25]. Approximate approaches assume that a given robot's local pose estimate is independent from the measurements sent by neighbours. This assumption leads to an overconfident state estimation. Work presented in [17] uses a *dependency tree* to track the recent interactions between robots. However, since this approach maintains only the recent interdependencies of the robot pose estimates, it tends to be overconfident. An interlaced extended Kalman filter- (EKF-) based suboptimal filtering approach is presented in [18, 19] to avoid the possibility of generating an overconfident state estimation. This approach requires each robot in the MRS to maintain a bank of EKFs representing the interactions between teammates. Although it produces a nonoverconfident state estimation, this book-keeping approach is unscalable, as the number of EKFs running on a single robot increases exponentially with the number of robots in the MRS. A suboptimal filtering approach called *channel filtering* is presented in [16], which requires a communication network without loops. However, a communication network without loops is an unrealistic assumption for the practical implementation of cooperative localization. An extended information filter is used for implementing the DCL in [20], in which each robot maintains the history of robot-to-robot relative measurement updates. CI-based approaches have also been reported for the DCL [21]. However, the general CI algorithm neglects possible independencies between local estimates.
This causes more conservative estimates and may produce an estimation error covariance which is larger than that of the best unfused estimate [26].

Available cooperative localization algorithms use four types of interrobot observations: relative range-only [27, 28], relative bearing-only [29, 30], both relative range and relative bearing [31–33], and full relative pose measurements [5, 6, 21, 24, 25, 34]. Full relative pose measurement-based cooperative localization schemes always generate more accurate pose estimates than range-only, bearing-only, and range-and-bearing measurement systems [35]. The lowest estimation accuracy was found with the bearing-only measurement system [35].

#### 3. Problem Statement

To facilitate the mathematical formulation, let $\mathcal{R}$ represent the set that contains the unique identification indices of all robots in the MRS. The cardinality of this set, that is, $N = |\mathcal{R}|$, corresponds to the total number of robots in the MRS. Further, it is assumed that the robots are represented by $R_1, R_2, \ldots, R_N$, where the identification indices of the robots range from $1$ to $N$. The set $\mathcal{R}$ and its cardinality may or may not be known to each agent in the MRS.

It is assumed that the robots are navigating in a flat, two-dimensional (2D) terrain. Each robot in the MRS hosts a communication device to exchange information between teammates; a proprioceptive sensory system to measure egomotion (linear and angular velocities); and an exteroceptive sensory system to measure the relative pose of neighbouring robots. Known data correspondence is assumed for the exteroceptive sensory system. Besides these sensory systems and communication modules, some of the robots in the MRS host a sensory system, such as a differential global positioning system (DGPS), to receive absolute positioning information periodically.

##### 3.1. Robot’s Motion Model

Robot navigation in a 2D space is modeled by a general three-degree-of-freedom (3-DOF) discrete-time kinematic model

$$\mathbf{x}_{k} = f\left(\mathbf{x}_{k-1}, \mathbf{u}_{k-1}\right) + \mathbf{w}_{k-1}, \quad (1)$$

where $\mathbf{x}_{k} = [x_{k} \;\; y_{k} \;\; \theta_{k}]^{T}$ is the robot's pose with respect to a given coordinate frame (or global coordinate frame) at discrete time $k$. The nonlinear state propagation function and the sampling time are represented by $f(\cdot)$ and $\Delta t$, respectively. $\mathbf{u}_{k}$ is the system input, where $\mathbf{u}_{k} = [v_{x,k} \;\; v_{y,k} \;\; \omega_{k}]^{T}$. The parameters $v_{x,k}$ and $v_{y,k}$ are the nominal linear velocities in the $x$- and $y$-directions, respectively. $\omega_{k}$ is the nominal angular velocity. $\mathbf{w}_{k}$ represents the additive white Gaussian noise term with covariance $\mathbf{Q}_{k}$. For nonholonomic robots, the terms associated with the linear velocity in the $y$-direction, that is, $v_{y,k}$ and its noise component, should be omitted from the system model.
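As an illustrative sketch of model (1) for a holonomic robot, the noise-free prediction step can be written as follows (Python is our choice; the function name and array layout are ours, not the paper's, and the additive noise term $\mathbf{w}_{k}$ is left to the filter's process covariance):

```python
import numpy as np

def propagate_pose(x, u, dt):
    """Noise-free step of the 3-DOF kinematic model x_k = f(x_{k-1}, u_{k-1}).

    x : pose [x, y, theta] in the global frame
    u : nominal body-frame velocities [vx, vy, omega]
    """
    px, py, th = x
    vx, vy, om = u
    # Rotate the body-frame velocities into the global frame and integrate
    # over one sampling interval dt.
    return np.array([
        px + dt * (vx * np.cos(th) - vy * np.sin(th)),
        py + dt * (vx * np.sin(th) + vy * np.cos(th)),
        th + dt * om,
    ])
```

For a nonholonomic robot, one would simply pass `vy = 0`, matching the remark above.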

##### 3.2. Interrobot Relative Measurement Model

Consider the scenario where robot $R_i$ measures the relative pose of robot $R_j$. This relative pose measurement is modeled as

$$\mathbf{z}_{ij,k} = h\left(\mathbf{x}_{i,k}, \mathbf{x}_{j,k}\right) + \mathbf{v}_{k}, \quad d_{ij,k} \le r_{i}, \quad (2)$$

where $\mathbf{z}_{ij,k}$ is the relative pose of robot $R_j$ as measured by $R_i$; that is, $\mathbf{z}_{ij,k} = [x_{ij,k} \;\; y_{ij,k} \;\; \theta_{ij,k}]^{T}$, where $x_{ij,k}$, $y_{ij,k}$, and $\theta_{ij,k}$ are the $x$-position, $y$-position, and the orientation of robot $R_j$ with respect to robot $R_i$. This pose measurement is on the body-fixed coordinate system of robot $R_i$. The nonlinear measurement function is represented by $h(\cdot)$. The measurement noise, $\mathbf{v}_{k}$, is assumed to be additive white Gaussian noise with covariance $\mathbf{R}_{k}$. Parameters $d_{ij,k}$ and $r_{i}$ represent the distance between the two robots and the sensing range of robot $R_i$, respectively. It is assumed that the communication range of a given robot is greater than or equal to its sensing range. $\mathcal{N}_{i,k}$ represents the set that contains the unique identification indices of the robots which are within the sensing range of robot $R_i$ at the discrete time $k$. The matrix transpose operation is represented by $[\cdot]^{T}$. At a given time step, there may be no robots, one robot, or multiple robots operating within the sensing range of $R_i$. The maximum number of robots that can operate within the sensing range of $R_i$ is one robot less than the total number of robots in the MRS, that is, $N-1$. These imply that $0 \le |\mathcal{N}_{i,k}| \le N-1$ for $i \in \mathcal{R}$. Let

$$\mathcal{Z}_{i,k} = \left\{\mathbf{z}_{ij,k} \sim \mathcal{N}\left(\bar{\mathbf{z}}_{ij,k}, \mathbf{R}_{k}\right) : j \in \mathcal{N}_{i,k}\right\}$$

represent all relative pose measurements acquired by $R_i$ at time step $k$. Symbol $\mathcal{N}(\bar{\mathbf{z}}, \mathbf{R})$ represents that the distribution is Gaussian with mean (actual measurement) $\bar{\mathbf{z}}$ and covariance $\mathbf{R}$. The cardinality of the set $\mathcal{Z}_{i,k}$ follows the same property as set $\mathcal{N}_{i,k}$; that is, $0 \le |\mathcal{Z}_{i,k}| \le N-1$ for $i \in \mathcal{R}$.
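For a 3-DOF pose, the measurement function $h$ in (2) amounts to rotating the displacement between the two robots into the observer's body frame. A minimal sketch (our own naming; angle wrapping to $(-\pi, \pi]$ is omitted for brevity but needed in practice):

```python
import numpy as np

def relative_pose(x_i, x_j):
    """Measurement function h in (2): pose of robot j in robot i's body frame."""
    dx, dy = x_j[0] - x_i[0], x_j[1] - x_i[1]
    c, s = np.cos(x_i[2]), np.sin(x_i[2])
    # Rotate the global displacement into i's body frame; the heading
    # difference gives j's orientation relative to i.
    return np.array([c * dx + s * dy,
                     -s * dx + c * dy,
                     x_j[2] - x_i[2]])
```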

#### 4. Decentralized Cooperative Localization Algorithm

##### 4.1. State Propagation

The objective of the state propagation step is to predict the current pose and the associated estimation uncertainty of a given robot using both the robot's posterior state density and the odometry reading at the previous time step. In order to avoid the cyclic update (a cyclic update is the process that uses the same information more than once), each robot maintains two covariance matrices: the total covariance and the independent covariance. Once the total and independent covariances are known, the dependent covariance can be calculated as

$$\mathbf{P}^{D}_{i,k} = \mathbf{P}_{i,k} - \mathbf{P}^{I}_{i,k},$$

where $\mathbf{P}_{i,k}$, $\mathbf{P}^{I}_{i,k}$, and $\mathbf{P}^{D}_{i,k}$ are the total covariance, independent covariance, and dependent covariance of $R_i$'s pose estimation at time step $k$, respectively.

This study employs the CKF for sensor fusion. The CKF is a recently developed suboptimal nonlinear filter which uses the spherical-radial cubature rule to solve the multidimensional integrals associated with the Bayesian filter under the Gaussian approximation [36]. The CKF is a Jacobian-free approach that is always guaranteed to have a positive definite covariance matrix and has demonstrated superior performance to the celebrated EKF and the unscented Kalman filter (UKF) [37–39].

For a system with $n$ state variables, the third-order spherical-radial cubature rule selects $2n$ cubature points in order to compute the standard Gaussian integral, as follows:

$$\int_{\mathbb{R}^{n}} g(\mathbf{x})\, \mathcal{N}\left(\mathbf{x}; \boldsymbol{\mu}, \mathbf{P}\right) d\mathbf{x} \approx \frac{1}{2n} \sum_{l=1}^{2n} g\left(\boldsymbol{\mu} + \mathbf{S}\boldsymbol{\xi}_{l}\right),$$

where the square-root factor $\mathbf{S}$ of the covariance $\mathbf{P}$ satisfies the equality $\mathbf{P} = \mathbf{S}\mathbf{S}^{T}$. The cubature points are given by

$$\boldsymbol{\xi}_{l} = \begin{cases} \sqrt{n}\,\mathbf{e}_{l}, & l = 1, \ldots, n, \\ -\sqrt{n}\,\mathbf{e}_{l-n}, & l = n+1, \ldots, 2n, \end{cases}$$

where $\mathbf{e}_{l}$ represents the $l$th elementary column vector. In this paper, the robot pose and odometry vectors are augmented into a single state vector, leading to $n = n_{x} + n_{u}$, where $n_{x}$ is the size of the pose vector and $n_{u}$ is the size of the odometry vector. Further, the general CKF algorithm is extended to accommodate independent covariance calculation capabilities.
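The cubature rule above can be sketched in a few lines (an illustrative Python fragment with our own names; any factor satisfying $\mathbf{P} = \mathbf{S}\mathbf{S}^{T}$ works, Cholesky being the usual choice):

```python
import numpy as np

def cubature_points(mean, cov):
    """Generate the 2n points of the third-order spherical-radial rule."""
    n = mean.size
    S = np.linalg.cholesky(cov)                  # square-root factor, cov = S S^T
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
    return mean[:, None] + S @ xi                # one cubature point per column

def gaussian_expectation(g, mean, cov):
    """Approximate E[g(x)] for x ~ N(mean, cov) with equal weights 1/(2n)."""
    pts = cubature_points(mean, cov)
    return np.mean([g(pts[:, l]) for l in range(pts.shape[1])], axis=0)
```

The rule is exact for polynomials up to third degree, which is why it reproduces means and covariances of linear transformations without Jacobians.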

The proposed state propagation approach is summarized as follows.

*Algorithm 1 (state propagation). **Data*. Assume at time $k-1$ the posterior density function of the robot's pose estimation $\mathcal{N}(\hat{\mathbf{x}}_{k-1|k-1}, \mathbf{P}_{k-1|k-1})$, the independent covariance matrix $\mathbf{P}^{I}_{k-1|k-1}$, and the odometry reading $\bar{\mathbf{u}}_{k-1}$ are known.*Result*. Calculate the predictive density function of the robot's pose estimation $\mathcal{N}(\hat{\mathbf{x}}_{k|k-1}, \mathbf{P}_{k|k-1})$ and the associated independent covariance matrix $\mathbf{P}^{I}_{k|k-1}$.(1) Augment the state and odometry reading into a single vector: $\hat{\mathbf{X}}_{k-1|k-1} = [\hat{\mathbf{x}}^{T}_{k-1|k-1} \;\; \bar{\mathbf{u}}^{T}_{k-1}]^{T}$.(2) Compute the corresponding covariance matrix: $\mathbf{P}^{a}_{k-1|k-1} = \mathrm{blkdiag}(\mathbf{P}_{k-1|k-1}, \mathbf{Q}_{k-1})$.(3) Factorize $\mathbf{P}^{a}_{k-1|k-1} = \mathbf{S}_{k-1|k-1}\mathbf{S}^{T}_{k-1|k-1}$.(4) Generate the cubature points $(l = 1, \ldots, 2n)$: $\mathcal{X}_{l,k-1|k-1} = \mathbf{S}_{k-1|k-1}\boldsymbol{\xi}_{l} + \hat{\mathbf{X}}_{k-1|k-1}$.(5) Propagate each cubature point through the nonlinear state propagation function given in (1): $\mathcal{X}^{*}_{l,k|k-1} = f(\mathcal{X}_{l,k-1|k-1})$.(6) Predict the next state: $\hat{\mathbf{x}}_{k|k-1} = \frac{1}{2n}\sum_{l=1}^{2n}\mathcal{X}^{*}_{l,k|k-1}$.(7) Estimate the predictive error covariance: $\mathbf{P}_{k|k-1} = \frac{1}{2n}\sum_{l=1}^{2n}\mathcal{X}^{*}_{l,k|k-1}\mathcal{X}^{*T}_{l,k|k-1} - \hat{\mathbf{x}}_{k|k-1}\hat{\mathbf{x}}^{T}_{k|k-1}$.(8) To calculate the independent covariance, construct a new block-diagonalized covariance matrix as follows: $\mathbf{P}^{aI}_{k-1|k-1} = \mathrm{blkdiag}(\mathbf{P}^{I}_{k-1|k-1}, \mathbf{Q}_{k-1})$.(9) Factorize $\mathbf{P}^{aI}_{k-1|k-1}$, then generate a new set of cubature points, and propagate this new cubature point set through the nonlinear state propagation function (1) (refer to lines (3), (4), and (5) for equations).(10) Predict the new state using the independent covariance: $\hat{\mathbf{x}}^{I}_{k|k-1} = \frac{1}{2n}\sum_{l=1}^{2n}\mathcal{X}^{I*}_{l,k|k-1}$.(11) Estimate the independent predictive error covariance: $\mathbf{P}^{I}_{k|k-1} = \frac{1}{2n}\sum_{l=1}^{2n}\mathcal{X}^{I*}_{l,k|k-1}\mathcal{X}^{I*T}_{l,k|k-1} - \hat{\mathbf{x}}^{I}_{k|k-1}\hat{\mathbf{x}}^{IT}_{k|k-1}$. Here $n_{x}$ is the size of the robot's pose vector, $n_{u}$ is the size of the robot's odometry vector, $\mathbf{0}_{p \times q}$ is a matrix with $p$ rows and $q$ columns whose entries are all zeros, and the subscript $k|k-1$ denotes a quantity at time $k$ conditioned on information up to time $k-1$.

The algorithm is initialized with the known prior density, independent covariance matrix, and odometry reading at the previous time step (say, time step $k-1$). The algorithm predicts the robot pose for the next time step along with the associated total and independent covariances. First, the algorithm augments the estimated pose vector with the odometry vector at time $k-1$ (line (1)). The associated covariance matrix is then computed by block-diagonalization of the estimation and process covariance matrices (line (2)). In the CKF, a set of cubature points is used to represent the current estimated pose and associated estimation uncertainties (line (4)). To generate these cubature points, the square-root factor of the covariance matrix is required. Any matrix decomposition approach that preserves the equality $\mathbf{P} = \mathbf{S}\mathbf{S}^{T}$ can be exploited to compute the square-root factor of the covariance matrix (line (3)). The cubature points that represent the current state and odometry measurements are evaluated on the nonlinear state propagation function (line (5)), which generates the cubature point distribution for the predicted state. The predicted pose (or state) of the robot is the average of the propagated cubature points (line (6)). The total predictive covariance is then computed in line (7). Once the total predictive covariance is calculated, a new block-diagonalized covariance matrix is generated using the independent covariance matrix of time $k-1$ and the process covariance matrix (line (8)). After computing this matrix, its square-root factor is computed (similar to line (3)); then a set of cubature points is generated using the new square-root factor (similar to line (4)); and, finally, the newly generated cubature points are propagated through the nonlinear state propagation function (similar to line (5)) (line (9)). These steps are followed by the computation of the prediction for the independent propagated state (line (10)) and the associated covariance matrix (line (11)).
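Under our reading of Algorithm 1, the prediction amounts to running the same cubature pass twice, once with the total covariance and once with the independent one. A sketch follows (Python; the function names, the `np.block` block-diagonalization, and the `f(x, u, dt)` signature are our assumptions, not the paper's):

```python
import numpy as np

def ckf_predict(x, P, P_ind, u, Q, f, dt):
    """Algorithm 1 sketch: CKF prediction with total and independent covariances."""
    nx, nu = x.size, u.size

    def run(P_use):
        xa = np.concatenate([x, u])                        # line (1)
        Pa = np.block([[P_use, np.zeros((nx, nu))],
                       [np.zeros((nu, nx)), Q]])           # line (2): blkdiag
        n = nx + nu
        S = np.linalg.cholesky(Pa)                         # line (3)
        xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
        pts = xa[:, None] + S @ xi                         # line (4)
        prop = np.column_stack([f(p[:nx], p[nx:], dt)
                                for p in pts.T])           # line (5)
        x_pred = prop.mean(axis=1)                         # line (6)
        P_pred = prop @ prop.T / (2 * n) - np.outer(x_pred, x_pred)  # line (7)
        return x_pred, P_pred

    x_pred, P_tot = run(P)          # lines (1)-(7): total covariance
    _, P_i = run(P_ind)             # lines (8)-(11): independent covariance
    return x_pred, P_tot, P_i
```

For a linear propagation function the two passes reproduce $\mathbf{P} + \mathbf{Q}$ and $\mathbf{P}^{I} + \mathbf{Q}$ exactly, which is a convenient sanity check.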

##### 4.2. Computing Pose of Neighbours

At a relative pose measurement event, each robot acquires the relative pose of its neighbours. These measurements are in the local coordinate system of the observing robot and are required to be transformed to the reference coordinate system prior to executing the sensor fusion at the neighbouring robot's local processor. Assume that, at time $k$, robot $R_i$ measures the relative pose of robot $R_j$. This nonlinear coordinate transformation can be modeled as

$$\mathbf{z}^{*}_{ij,k} = \hat{\mathbf{x}}_{i,k|k-1} \oplus \mathbf{z}_{ij,k}, \quad (18)$$

where $\oplus$ is known as the pose composition operator and $\mathbf{z}^{*}_{ij,k}$ is the global pose of $R_j$ on the reference coordinate frame, as measured by $R_i$. The superscript asterisk "$*$" is used to indicate that the measurement is on the reference coordinate system. Symbol $\mathbf{z}_{ij,k}$ has the same meaning as in (2). Since this Cartesian-to-Cartesian transformation is nonlinear, a cubature point-based approach, as summarized in Algorithm 2, is exploited to achieve a consistent and unbiased coordinate transformation (see Appendix).

*Algorithm 2 (relative to global conversion). **Data*. Assume at time $k$ the predictive density function of a robot's (say, $R_i$'s) pose estimation $\mathcal{N}(\hat{\mathbf{x}}_{i,k|k-1}, \mathbf{P}_{i,k|k-1})$, the independent covariance matrix $\mathbf{P}^{I}_{i,k|k-1}$, and the relative pose measurement $\mathbf{z}_{ij,k}$ of a neighbour (say, $R_j$) are available.*Result*. Calculate the global pose measurement of $R_j$, that is, $\mathbf{z}^{*}_{ij,k}$, and the associated independent and dependent measurement covariances, that is, $\mathbf{R}^{I*}_{ij,k}$ and $\mathbf{R}^{D*}_{ij,k}$.(1) Augment the predictive state and relative pose into a single vector: $\hat{\mathbf{X}}_{k} = [\hat{\mathbf{x}}^{T}_{i,k|k-1} \;\; \mathbf{z}^{T}_{ij,k}]^{T}$.(2) Construct the corresponding covariance matrix: $\mathbf{P}^{a}_{k} = \mathrm{blkdiag}(\mathbf{P}_{i,k|k-1}, \mathbf{R}_{k})$.(3) Factorize $\mathbf{P}^{a}_{k} = \mathbf{S}_{k}\mathbf{S}^{T}_{k}$.(4) Generate the set of cubature points $(l = 1, \ldots, 2n)$: $\mathcal{X}_{l,k} = \mathbf{S}_{k}\boldsymbol{\xi}_{l} + \hat{\mathbf{X}}_{k}$.(5) Perform the coordinate transform (18) for each cubature point, composing the pose partition with the measurement partition: $\mathcal{Z}_{l,k} = \mathcal{X}^{x}_{l,k} \oplus \mathcal{X}^{z}_{l,k}$.(6) Compute the global pose of the neighbour: $\mathbf{z}^{*}_{ij,k} = \frac{1}{2n}\sum_{l=1}^{2n}\mathcal{Z}_{l,k}$.(7) Compute the total noise (error) covariance: $\mathbf{R}^{*}_{ij,k} = \frac{1}{2n}\sum_{l=1}^{2n}\mathcal{Z}_{l,k}\mathcal{Z}^{T}_{l,k} - \mathbf{z}^{*}_{ij,k}\mathbf{z}^{*T}_{ij,k}$.(8) Construct a block-diagonalized matrix using the independent predictive covariance and the measurement noise covariance: $\mathbf{P}^{aI}_{k} = \mathrm{blkdiag}(\mathbf{P}^{I}_{i,k|k-1}, \mathbf{R}_{k})$.(9) Factorize $\mathbf{P}^{aI}_{k}$ and then generate a new set of cubature points, followed by the coordinate transformation of each cubature point (refer to lines (3), (4), and (5) for equations).(10) Compute the coordinate transformed measurement using the independent covariance: $\mathbf{z}^{I*}_{ij,k} = \frac{1}{2n}\sum_{l=1}^{2n}\mathcal{Z}^{I}_{l,k}$.(11) Estimate the independent covariance for the pose measurement: $\mathbf{R}^{I*}_{ij,k} = \frac{1}{2n}\sum_{l=1}^{2n}\mathcal{Z}^{I}_{l,k}\mathcal{Z}^{IT}_{l,k} - \mathbf{z}^{I*}_{ij,k}\mathbf{z}^{I*T}_{ij,k}$.(12) Estimate the dependent covariance for the pose measurement: $\mathbf{R}^{D*}_{ij,k} = \mathbf{R}^{*}_{ij,k} - \mathbf{R}^{I*}_{ij,k}$.

The algorithm is initialized with a known predictive density of the pose estimation of the observing robot along with the predictive independent covariance. At an interrobot relative pose measurement event, the observing robot augments its predictive pose and the relative pose measurement into a single state vector (line (1)). The associated covariance matrix is obtained by block-diagonalization of the predictive total covariance and the noise covariance of the relative pose measurement (line (2)). This block-diagonalized covariance matrix is then factorized and exploited for generating a set of cubature points to represent the state vector (lines (3) and (4)). The generated cubature points are evaluated on the nonlinear Cartesian-to-Cartesian coordinate transformation function, that is, (18), in order to compute the coordinate transformed cubature points (line (5)). This step is followed by the computation of the observed robot's pose in the reference coordinate system (line (6)) and the associated total noise (error) covariance matrix (line (7)). Once the total noise covariance is calculated, the algorithm constructs a new block-diagonalized covariance matrix using the predictive independent covariance matrix and the relative pose measurement noise covariance matrix (line (8)). After computing this matrix, its square-root factor is computed (similar to line (3)); then a set of cubature points is generated using the new square-root factor (similar to line (4)); and, finally, the newly generated cubature points are transformed from the local coordinate system to the global coordinate system (similar to line (5)) (line (9)). These steps are followed by computing the coordinate transformed measurement and the associated independent noise covariance matrix (lines (10) and (11)). Finally, the dependent covariance of the coordinate transformed measurement is calculated as the difference between the total and independent covariances (line (12)).
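One cubature pass of Algorithm 2 can be sketched as below (Python; the pose composition for a 3-DOF pose and all names are our assumptions). Running it once with the total predictive covariance and once with the independent covariance yields the total and independent measurement covariances; the dependent part is their difference, as in line (12):

```python
import numpy as np

def compose_pose(x_i, z):
    """Pose composition operator (18): neighbour's pose in the global frame."""
    c, s = np.cos(x_i[2]), np.sin(x_i[2])
    return np.array([x_i[0] + c * z[0] - s * z[1],
                     x_i[1] + s * z[0] + c * z[1],
                     x_i[2] + z[2]])

def relative_to_global(x_i, P_use, z, R):
    """One cubature pass: global pose measurement and its covariance."""
    xa = np.concatenate([x_i, z])                          # line (1)
    Pa = np.block([[P_use, np.zeros((3, 3))],
                   [np.zeros((3, 3)), R]])                 # line (2): blkdiag
    n = xa.size
    S = np.linalg.cholesky(Pa)                             # line (3)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
    pts = xa[:, None] + S @ xi                             # line (4)
    out = np.column_stack([compose_pose(p[:3], p[3:])
                           for p in pts.T])                # line (5)
    z_glob = out.mean(axis=1)                              # line (6)
    R_glob = out @ out.T / (2 * n) - np.outer(z_glob, z_glob)  # line (7)
    return z_glob, R_glob
```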

##### 4.3. Update Local Pose Estimation Using the Pose Sent by Neighbours

When a robot receives a pose measurement from a neighbour, this measurement is fused with the observed robot’s local estimate in order to improve its localization accuracy.

*Algorithm 3 (state update with the measurement sent by neighbours). **Data*. Assume the predictive density of the robot pose estimation $\mathcal{N}(\hat{\mathbf{x}}_{k|k-1}, \mathbf{P}_{k|k-1})$, the associated independent covariance matrix $\mathbf{P}^{I}_{k|k-1}$, and a pose measurement $\mathbf{z}^{*}_{k}$ from a neighbour along with the associated independent and dependent covariances $\mathbf{R}^{I*}_{k}$ and $\mathbf{R}^{D*}_{k}$ are available.*Result*. Calculate the posterior density of time $k$, that is, $\mathcal{N}(\hat{\mathbf{x}}_{k|k}, \mathbf{P}_{k|k})$, and the associated independent covariance matrix $\mathbf{P}^{I}_{k|k}$.(1) Calculate the predictive dependent covariance: $\mathbf{P}^{D}_{k|k-1} = \mathbf{P}_{k|k-1} - \mathbf{P}^{I}_{k|k-1}$.(2) Compute the weighted predictive covariance (31): $\mathbf{P}^{\omega}_{k|k-1} = \mathbf{P}^{D}_{k|k-1}/\omega + \mathbf{P}^{I}_{k|k-1}$.(3) Compute the weighted measurement covariance (32): $\mathbf{R}^{\omega}_{k} = \mathbf{R}^{D*}_{k}/(1-\omega) + \mathbf{R}^{I*}_{k}$.(4) **if** measurement gate validated **then**(5) Compute the Kalman gain (33): $\mathbf{K}_{k} = \mathbf{P}^{\omega}_{k|k-1}\left(\mathbf{P}^{\omega}_{k|k-1} + \mathbf{R}^{\omega}_{k}\right)^{-1}$.(6) Update the robot pose (34): $\hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}_{k|k-1} + \mathbf{K}_{k}\left(\mathbf{z}^{*}_{k} - \hat{\mathbf{x}}_{k|k-1}\right)$.(7) Update the total covariance (35): $\mathbf{P}_{k|k} = \left(\mathbf{I} - \mathbf{K}_{k}\right)\mathbf{P}^{\omega}_{k|k-1}$.(8) Update the independent covariance (36): $\mathbf{P}^{I}_{k|k} = \left(\mathbf{I} - \mathbf{K}_{k}\right)\mathbf{P}^{I}_{k|k-1}\left(\mathbf{I} - \mathbf{K}_{k}\right)^{T} + \mathbf{K}_{k}\mathbf{R}^{I*}_{k}\mathbf{K}^{T}_{k}$.(9) **else**(10) Assign the predictive state and covariances to the posterior state and covariances.(11) **end if**. $\mathbf{I}$ is the identity matrix of size $3 \times 3$ and $\omega$ is a weighting coefficient that belongs to the interval $[0, 1]$.

Algorithm 3 summarizes the steps involved in this measurement update. The algorithm is initialized by calculating the observed robot's predictive dependent covariance (line (1)). The weighted predictive covariance and the weighted measurement covariance are then calculated as given in (31) and (32), respectively (lines (2) and (3)). The coefficient $\omega$ belongs to the interval $[0, 1]$ and can be determined such that the trace or determinant of the updated total covariance is minimized. Detection and elimination of outliers are important for preventing the divergence of state estimates. This requirement is fulfilled by employing an ellipsoidal measurement validating gate [40] (line (4)). As the pose measurements from neighbours are in the reference coordinate frame, the measurement model of this sensor fusion becomes linear. Therefore, the linear Kalman filter can be exploited for sensor fusion. In this measurement update, the measurement matrix of the traditional Kalman filter becomes an identity matrix, $\mathbf{I}$, of size $3 \times 3$. Using the multiplicative property of the identity matrix (that is, $\mathbf{I}\mathbf{A} = \mathbf{A}\mathbf{I} = \mathbf{A}$, where $\mathbf{A}$ is $3 \times 3$), the Kalman gain, the updated robot pose, and the associated total and independent covariance matrices can be computed from (33), (34), (35), and (36), respectively (lines (5), (6), (7), and (8)). For outliers, the measurements are discarded and the predictive pose and the associated total and independent covariance matrices are directly assigned to the corresponding posterior quantities (lines (9)–(11)).
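A sketch of this Split-CI fusion step follows, assuming the common split covariance intersection form with the measurement matrix equal to identity (the gate test and the optimization of $\omega$ are omitted; the independent-covariance update follows the general Split-CI literature, and all names are ours). When both estimates are fully independent, it reduces to a standard Kalman update:

```python
import numpy as np

def split_ci_update(x, P, P_ind, z, R_ind, R_dep, omega=0.5):
    """Split-CI measurement update with H = I (gate omitted).

    omega in (0, 1) inflates only the *dependent* covariance parts, so
    possibly correlated information is fused conservatively while the
    independent parts are fused at full weight.
    """
    P_dep = P - P_ind                          # line (1)
    P_w = P_dep / omega + P_ind                # line (2)
    R_w = R_dep / (1.0 - omega) + R_ind        # line (3)
    K = P_w @ np.linalg.inv(P_w + R_w)         # line (5)
    I = np.eye(x.size)
    x_new = x + K @ (z - x)                    # line (6)
    P_new = (I - K) @ P_w                      # line (7)
    # Only truly independent information tightens the independent part.
    P_ind_new = (I - K) @ P_ind @ (I - K).T + K @ R_ind @ K.T  # line (8)
    return x_new, P_new, P_ind_new
```

In practice $\omega$ would be chosen per update to minimize the trace or determinant of `P_new`, as described above.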

##### 4.4. Update Local Pose Estimation Using the Measurement Acquired by the Absolute Positioning System

It is assumed that some of the robots in the MRS host a DGPS sensor in order to measure global position information. This position measurement at time $k$ is modeled as

$$\mathbf{z}^{g}_{k} = \mathbf{H}_{k}\mathbf{x}_{k} + \mathbf{v}^{g}_{k},$$

where $\mathbf{z}^{g}_{k}$ is the measurement vector and $\mathbf{v}^{g}_{k}$ is the additive white Gaussian noise term with covariance $\mathbf{R}^{g}_{k}$. This measurement is linear and independent from the robot's pose estimate. Thus, this measurement can be fused with the current state estimate using the general linear Kalman filter measurement update steps, followed by

$$\mathbf{P}^{I}_{k|k} = \left(\mathbf{I} - \mathbf{K}_{k}\mathbf{H}_{k}\right)\mathbf{P}^{I}_{k|k-1}\left(\mathbf{I} - \mathbf{K}_{k}\mathbf{H}_{k}\right)^{T} + \mathbf{K}_{k}\mathbf{R}^{g}_{k}\mathbf{K}^{T}_{k}.$$

This equation computes the updated independent covariance matrix at the event of a DGPS measurement update.
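A hedged sketch of this DGPS update follows; the position-only selection matrix `H` and the Joseph-style independent-covariance update are our assumptions consistent with the description above, and the names are ours:

```python
import numpy as np

def dgps_update(x, P, P_ind, z, R):
    """Linear KF update with an absolute (DGPS) position fix.

    Because the DGPS reading is independent of the local belief, the
    independent covariance is updated with the same gain in Joseph form.
    """
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])            # measures position [x, y] only
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x + K @ (z - H @ x)
    IKH = np.eye(3) - K @ H
    P_new = IKH @ P
    P_ind_new = IKH @ P_ind @ IKH.T + K @ R @ K.T
    return x_new, P_new, P_ind_new
```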

##### 4.5. Sensor Fusion Architecture

This study assumes that each agent in the MRS initially knows its pose with respect to a given reference coordinate frame. The recursive state estimation framework of the proposed DCL algorithm is outlined in Algorithm 4 and is graphically illustrated in Figure 1.