Guohua Liu, Juan Guan, Haiying Liu, Chenlin Wang, "Multirobot Collaborative Navigation Algorithms Based on Odometer/Vision Information Fusion", Mathematical Problems in Engineering, vol. 2020, Article ID 5819409, 16 pages, 2020. https://doi.org/10.1155/2020/5819409
Multirobot Collaborative Navigation Algorithms Based on Odometer/Vision Information Fusion
Abstract
Collaborative navigation is a key technology for multi-mobile-robot systems. To improve the performance of such systems, this paper presents collaborative navigation algorithms based on odometer/vision multisource information fusion. First, the multisource information fusion collaborative navigation system model is established, including the mobile robot model, the odometry measurement model, the lidar relative measurement model, the UWB relative measurement model, and the SLAM model based on lidar measurement. Second, the frameworks of centralized and decentralized collaborative navigation based on odometer/vision fusion are given, and the vision-based SLAM algorithms are presented. Then, the centralized and decentralized odometer/vision collaborative navigation algorithms are derived, including the time update, the single-node measurement update, the relative measurement update between nodes, and the covariance intersection filtering algorithm. Finally, simulation experiments are designed to verify the effectiveness of the algorithms: two multirobot collaborative navigation scenarios, relative-measurement-aided odometry and odometer/SLAM fusion, are constructed, and the advantages and disadvantages of the centralized and decentralized collaborative navigation algorithms in each scenario are analyzed.
1. Introduction
With the coming of the age of intelligence, the application scenarios and task requirements of intelligent mobile robots are becoming more and more complex [1, 2]. A single robot can hardly meet the demand for highly automated robotic systems. Multirobot systems, which embody group intelligence, therefore have broad prospects and have attracted extensive attention from researchers [3–5]. The premise of collaborative robot navigation is that each robot has a certain capability for navigation and localization on its own, which makes research on robot collaborative navigation systems particularly important [6].
The navigation accuracy of a single robot depends only on its own navigation system, independent of other moving bodies. This approach is relatively simple, but factors such as onboard computing power, sensor quality, and sensor field of view limit the working range of the navigation system to a certain extent and weaken its ability to suppress noise, reduce errors, and adapt to complex environments. A robot collaborative navigation system can compensate for these shortcomings. When relative measurements between robots are available, the relative information can be fully used to correct the navigation results. Robots can also establish links with each other to share navigation resources and obtain better navigation performance [7–9].
In research on mobile robot navigation and localization technology, the development of Simultaneous Localization and Mapping (SLAM) provides a new approach to the autonomous positioning of mobile robots in complex scenes [10]. At the same time, radio navigation and visual navigation, as significant technical means of realizing relative navigation, have received much attention and provide important technical support for further research on multirobot collaborative navigation [11–13]. However, traditional integrated navigation technology is far from meeting the growing demand for high-performance collaborative navigation.
At present, most data fusion methods for the multisource heterogeneous sensors of an integrated navigation system use a centralized data fusion algorithm. In the collaborative navigation of mobile robots, this approach has drawbacks, including high communication cost and poor robustness. However, current research on decentralized data fusion algorithms suitable for collaborative navigation environments is not yet mature [14, 15].
Therefore, based on the information fusion of multisource heterogeneous sensors and the associated SLAM data association algorithms, this paper puts forward a concept, model, and solution for multisource heterogeneous sensor information fusion that can significantly improve collaborative navigation performance. It addresses open problems in cooperative navigation technology, explores the key technologies that must be solved for collaborative navigation of multirobot platforms, and provides a theoretical basis and technical support for high-performance collaborative navigation applications.
The structure of this paper is outlined as follows. In Section 2, the multisource information fusion collaborative navigation system model is described, and the principles of the modules involved and the sources of error are analyzed. In Section 3, the frameworks of centralized and decentralized collaborative navigation based on odometer/vision fusion are given, and the vision-based SLAM algorithms are presented. In Sections 4 and 5, the centralized and decentralized odometer/vision collaborative navigation algorithms are derived, respectively. In Section 6, simulation experiments are designed to verify the effectiveness of the algorithms.
2. Multisource Information Fusion Collaborative Navigation System Model
2.1. Motion Robot Model
Suppose that the mobile robot moves along a circular arc trajectory. Figure 1 shows the motion of a single mobile robot.
2.1.1. Odometry Measurement Model
According to the principle of odometric dead reckoning [16], the mobile robot starts from a known position and, after one sampling period of movement, reaches an end position; the trajectory between the two can be regarded as a circular arc of constant radius. The discrete-time pose of the robot is then given by equations in terms of the pulse-count increments of the left and right wheel encoders, the distance between the right and left wheels (the wheel base), and the resolution parameters of the two encoders. Solving these equations yields the pose increment over one sampling period, from which the discrete motion equation of the mobile robot is obtained.
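Since the symbols of the original equations were lost in extraction, the standard differential-drive dead-reckoning step can be sketched as follows; function and variable names are illustrative, and the arc is approximated by a chord over one sampling period, which agrees with the circular-arc model to first order.

```python
import numpy as np

def odometry_update(pose, d_left, d_right, wheel_base):
    """One dead-reckoning step for a differential-drive robot.

    pose: (x, y, theta); d_left / d_right: wheel travel distances over the
    sampling period (encoder pulse increments times encoder resolution);
    wheel_base: distance between the left and right wheels.
    """
    x, y, theta = pose
    d_center = 0.5 * (d_right + d_left)        # translation of the midpoint
    d_theta = (d_right - d_left) / wheel_base  # heading increment
    # First-order discrete motion equation (arc approximated by its chord)
    x += d_center * np.cos(theta + 0.5 * d_theta)
    y += d_center * np.sin(theta + 0.5 * d_theta)
    theta += d_theta
    return np.array([x, y, theta])
```

Iterating this update over successive encoder readings reproduces the discrete motion equation described above.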
2.1.2. Odometer Error Model
Odometer-based positioning is subject to both systematic and random errors [17]. The simulation experiments mainly consider the influence of random errors; the corresponding covariance matrix is given below.
2.2. Lidar Environmental Detection Model
As the main module for realizing SLAM, the environment detection module obtains information about the surrounding environment through exteroceptive sensors.
2.2.1. Lidar Scanning Model
The two-dimensional lidar outputs point cloud data of the surrounding environment, including angular information. The scanning schematic of the two-dimensional lidar is shown in Figure 2.
As shown in Figure 2, the lidar scans in the sensor coordinate system along the scanning direction with a fixed scanning angle resolution. For a measured point detected in the i-th direction, the angle between that direction and the positive axis of the sensor frame is determined by the index and the inherent scanning angle resolution. A frame of point cloud data is obtained after one scan and can be recorded as a set whose size is the number of scanned data points in the frame, each element being the polar coordinates of a measured point; the final point cloud data set is accumulated from such frames.
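As a minimal sketch of this scanning model, one frame of polar measurements can be converted into Cartesian points in the sensor frame; the names and the scan parameterization are illustrative, not taken from the paper.

```python
import numpy as np

def scan_to_points(ranges, angle_min, angle_increment):
    """Convert one lidar frame (polar form) into Cartesian sensor-frame points.

    ranges: array of measured distances, one per scanning direction;
    angle_min: angle of the first beam; angle_increment: fixed angular
    resolution between consecutive beams.
    """
    angles = angle_min + angle_increment * np.arange(len(ranges))
    # Each row of the result is one measured point (x, y) in the sensor frame
    return np.column_stack((ranges * np.cos(angles), ranges * np.sin(angles)))
```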
2.2.2. Lidar Error Model
In the simulation experiments, the lidar measurement error can be simplified to Gaussian white noise along the x and y directions of the mobile robot's carrier coordinate system [18]. The positioning errors in the two directions are assumed to be independent, so the corresponding covariance matrix is diagonal, with entries representing the variances of the observation along the x and y directions of the carrier coordinate system, respectively.
2.3. UWB Relative Measurement Model
The relative measurement module is an important part of the cooperative navigation network, through which each node relates its state to the state information of surrounding nodes. UWB (Ultra-Wideband), which measures relative distance, and lidar, which measures relative position, are selected as the research objects in this paper. Because the scanning model of the lidar has been described in the previous section, this section only establishes the corresponding ranging model and error model.
2.3.1. UWB Ranging Model
At present, there are two main ranging methods: the two-way ranging (TWR) method and the symmetric double-sided two-way ranging (SDS-TWR) method. Compared with TWR, SDS-TWR greatly reduces the ranging error caused by imperfect clock synchronization and is therefore often used for distance measurement [19] (see Figure 3).
The distance between transceiver A and transceiver B can be obtained from Figure 3 in terms of the round-trip interval measured at transceiver A, between its first transmitted signal and the received reply; the corresponding quantities at transceiver B are obtained in the same way.
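The SDS-TWR combination can be sketched as follows; the timing variable names are illustrative, and the expression is the standard symmetric double-sided combination of the two round-trip and two reply intervals, in which clock-offset errors largely cancel.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def sds_twr_distance(t_round_a, t_round_b, t_reply_a, t_reply_b):
    """Distance estimate from symmetric double-sided two-way ranging.

    t_round_a: round-trip time measured at transceiver A (send -> reply heard);
    t_round_b: round-trip time measured at transceiver B in the second exchange;
    t_reply_a / t_reply_b: the transceivers' turnaround (reply) delays.
    """
    # Time of flight: the symmetric combination cancels first-order clock offsets
    tof = (t_round_a * t_round_b - t_reply_a * t_reply_b) / (
        t_round_a + t_round_b + t_reply_a + t_reply_b)
    return C * tof
```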
2.3.2. UWB Error Model
One of the main sources of error in UWB ranging is caused by imperfect clock synchronization between the two transceivers [20], and it depends on the ratios of the actual to the expected clock frequency of transceivers A and B.
2.4. SLAM Model Based on Lidar Measurement
SLAM is widely used in mobile robot navigation and positioning [21]. It is the key technology for solving the localization and mapping problems simultaneously, and it also provides a new way of approaching path planning. SLAM mainly addresses the situation in which the location of the mobile robot is unknown: an environmental map is built incrementally as exteroceptive sensors perceive the initially unknown surroundings, and at the same time the state estimate of the mobile robot itself is obtained using the map being built.
SLAM-based localization is the key technology for autonomous positioning of a mobile robot in an unknown environment. It is of great research value to realize accurate navigation and localization while reducing the constraints on the mobile robot (such as the absence of GPS) [22]. By applying this technology within the collaborative navigation framework, the navigation performance of the whole system can be effectively improved (see Figure 4).
As can be seen from Figure 4, localization with a SLAM navigation system is essentially a process of continuous estimation that approximates the true value [23]. The triangles in the graph represent the mobile robot and the circles represent observed landmarks, with grey circles denoting estimated landmarks. The solid line connecting the triangles represents the real trajectory, and the dotted line is the estimated trajectory.
3. Collaborative Navigation Framework Based on Odometer/Vision
When the nodes of a collaborative navigation network participate in collaborative navigation, different algorithms can be used to fuse the data obtained by the multisource heterogeneous sensors of the different nodes. Two data fusion schemes are generally used: centralized and decentralized data fusion algorithms [24].
3.1. Framework of Collaborative Navigation Algorithm
In this subsection, a data fusion framework for the centralized collaborative navigation algorithm is designed for the general odometer/vision model, and a data fusion framework for the decentralized collaborative navigation algorithm is designed in the same way.
In the centralized collaborative navigation structure, the measurement data obtained by each node are concentrated in a data fusion center. In the decentralized collaborative navigation structure, each node processes its own sensor data while sharing some information with other nodes (see Figure 5).
According to the odometer/vision collaborative navigation model, the widely used EKF is adopted to simulate both schemes: a centralized localization algorithm (CL) and a decentralized localization algorithm (DCL) are designed.
As shown in Figure 6, in the CL algorithm corresponding to the centralized structure, each node sends the information obtained by its own sensors to a central server, where data fusion is realized through the EKF. The server's state vector is the stacked state vector of all nodes and is propagated by dead reckoning. The measurement information obtained after data association in the SLAM process and the relative measurement information between nodes are then selected. The CL algorithm gathers the state and covariance information of all nodes, corrects them jointly, and sends the corrected estimates back to each node. Because all nodes participate in the joint measurement update, their state estimates become correlated with each other after the first update.
Building on the CL algorithm, this correlation is estimated in a principled way and the data fusion task is distributed to each node, yielding the DCL algorithm. Since all nodes in the algorithm are equivalent, only one node needs to be discussed (see Figure 7). To avoid overly optimistic estimation as far as possible, the DCL algorithm in this paper introduces the split covariance intersection filter: the covariance of each node is divided into a correlated term and an independent term. The time update is essentially the same as that of a single node. The measurement update takes two steps. First, the measurement information of the SLAM navigation system is used to propagate the state and integrate the information of the aided navigation system, producing one state estimate. Then, the state information sent by adjacent nodes, together with the relative measurements between nodes, yields another state estimate of the node. The relevant state information sent to neighboring nodes is also used to update the local maps.
3.2. SLAM Algorithms Based on Vision
3.2.1. Landmark State Estimation Algorithm
The key to the SLAM navigation algorithm lies in the data association process. The localization process of the SLAM navigation system is essentially one of continuous estimation approximating the true value. Such probability estimation problems are usually solved by introducing an appropriate filter, the most common being the EKF (see Figure 8).
Because the odometer selected in this paper has a high sampling frequency and the lidar offers high precision and high reliability, the EKF, with its good real-time performance, is selected. The state estimation process for landmark information in EKF-based SLAM is described below. The observation equation for the feature information obtained by the lidar relates the landmark state vector and the mobile robot state vector at the current time, with additive measurement noise of known variance. Since the landmarks are static, the landmark state estimate at the previous time can be regarded as the a priori estimate of the landmark state at the current time. The measurement update process based on the EKF is as follows:
  Step 1: calculate the innovation and the filter gain.
  Step 2: update the state estimate and the corresponding covariance,
where the covariance matrix of the landmark state estimate and the measurement matrix at the current time appear in the update.
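The two steps above can be sketched generically for a single landmark; the names are illustrative, and the predicted measurement and its Jacobian are supplied by the caller, since the original observation equation's symbols were lost.

```python
import numpy as np

def ekf_landmark_update(m, P, z, R, h, H):
    """One EKF measurement update for a static landmark state.

    m, P: a priori landmark estimate and covariance (the previous-time
    estimate, since landmarks are static); z: lidar observation; R: measurement
    noise covariance; h: predicted measurement h(m); H: measurement Jacobian.
    """
    # Step 1: innovation and filter gain
    v = z - h                          # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # filter gain
    # Step 2: updated state estimate and covariance
    m_new = m + K @ v
    P_new = (np.eye(len(m)) - K @ H) @ P
    return m_new, P_new
```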
Remark 1. Any observed landmark can have its position corrected by the above method; note that such correction is limited to the landmarks in the local map observed at the current time.
3.2.2. Data Association Algorithm
In a SLAM navigation system, the process of data association is an important prerequisite for state estimation. Incorrect data association is likely to lead to serious deviation of estimation results [25, 26].
At present, two data association algorithms are commonly used in SLAM: the nearest neighbor (NN) data association algorithm [27] and the joint compatibility branch and bound (JCBB) data association algorithm [28]. The NN algorithm requires little computation, but it easily produces wrong associations when the density of feature information is high, which causes the SLAM results to diverge; it is therefore suitable only for environments with sparse features and small system errors. JCBB improves on NN by extending the association of single features to all observed feature information jointly, which is more constrained and more reliable. JCBB can obtain more credible association hypotheses than NN and exclude some wrong ones. However, its computational load is significantly higher, which affects the real-time performance of the SLAM navigation system to some extent.
To ensure the accuracy of data association in SLAM while reducing the amount of computation as much as possible and enhancing the real-time performance of the algorithm, this subsection describes an optimized data association algorithm. The classification method of [29] is used to partition the sets of related feature information; then an appropriate subset of feature information from the local map and the preprocessed set of observed features are selected to form the association space.
First, the collection of feature information in the local map is partitioned according to the relative distance between each local map feature and the other features.
Then, the set of observed feature information is preprocessed and partitioned. In actual navigation, the observed feature information obtained by the lidar contains noise. The purpose of preprocessing is to filter out some of this noise, improving the accuracy of data association and reducing computation at the same time. The judgment uses a threshold determined by the performance of the laser sensor: when the relative distance between two observed features is less than the threshold, the observed feature is considered a feature point; otherwise it is treated as a noise point and does not participate in subsequent calculations.
After the set is partitioned, the observed feature information is sorted in the order of observation. Following the same procedure as for the local map feature set above, subsets are divided in turn, and no point participates in the division more than once.
Finally, an appropriate association set is selected for the data association algorithm. Each subset of local map features and the subset of observed features at the current time undergo a joint compatibility test, and the feature information with the best test results is selected to form a new subset that serves as the object of data association.
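A minimal Mahalanobis-gated nearest-neighbour step of the kind underlying the NN stage can be sketched as follows (the joint compatibility test of JCBB is not reproduced here); the names and the gate value are illustrative.

```python
import numpy as np

def nn_associate(observations, landmarks, S_inv, gate):
    """Nearest-neighbour association with a Mahalanobis (chi-square) gate.

    observations / landmarks: lists of 2-D position vectors; S_inv: inverse
    innovation covariance (assumed shared for simplicity); gate: chi-square
    threshold. Returns, per observation, the index of the closest landmark
    passing the gate, or -1 if none does (treated as noise / a new feature).
    """
    matches = []
    for z in observations:
        # Squared Mahalanobis distance to every candidate landmark
        d2 = [float((z - m) @ S_inv @ (z - m)) for m in landmarks]
        j = int(np.argmin(d2))
        matches.append(j if d2[j] < gate else -1)
    return matches
```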
4. Centralized Collaborative Navigation Algorithm
4.1. Time Update
First, the state space model is established. The state vector of a single mobile robot with three degrees of freedom contains its position and heading angle. Suppose the number of nodes is N; then the state space of the collaborative navigation system in the centralized framework stacks the state vectors of all N mobile robots in the group. The state equation of the system is driven by a function describing the kinematic characteristics of each mobile robot, the dead-reckoning input of each robot at the current time, and additive system noise.
It is assumed that the motion of each node is unaffected by, and not controlled by, any other node. The state transition matrix for centralized collaborative localization is therefore block-diagonal, with blocks given by the Jacobian matrices of the motion function with respect to each node's state vector and control input. The system noise covariance matrix in the centralized framework is likewise block-diagonal, built from the covariance matrices of the control inputs. The time update of the collaborative navigation system in the centralized framework then follows the standard EKF prediction equations.
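Under the block-diagonal structure just described, the stacked covariance prediction can be sketched as follows; the names are illustrative, and each robot contributes one diagonal block to the transition and noise matrices.

```python
import numpy as np

def centralized_time_update(P, F_list, G_list, U_list):
    """Covariance prediction for N independent robots stacked into one state.

    P: stacked prior covariance; F_list / G_list: per-robot Jacobians of the
    motion model w.r.t. state and control input; U_list: per-robot control
    input covariances. F and Q are block-diagonal because no robot's motion
    depends on another robot's state.
    """
    F = np.zeros_like(P)
    Q = np.zeros_like(P)
    k = 0
    for Fi, Gi, Ui in zip(F_list, G_list, U_list):
        d = Fi.shape[0]
        F[k:k+d, k:k+d] = Fi                 # block of the transition matrix
        Q[k:k+d, k:k+d] = Gi @ Ui @ Gi.T     # block of the system noise
        k += d
    return F @ P @ F.T + Q                   # standard EKF prediction
```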
4.2. SingleNode Measurement Update
In this section, the measurement updating process involving only one node in the centralized framework is described. The aided navigation system selected is SLAM navigation system, which integrates the landmark information of the surrounding environment measured by lidar. In this paper, a measurement model based on this navigation system is built, and the process of measurement updating based on EKF is described.
4.2.1. Measurement Model Based on SLAM
The measurement model based on SLAM is the model after data association. In this paper, the position information of landmarks obtained by the lidar serves as the observation: the observation equation relates the measured landmark position to the landmark coordinates in the world coordinate system and the state of the mobile robot at the current time, with additive measurement noise of known variance. After linearization and state augmentation, the observation equation of the whole system is obtained, with the measurement matrix given by the Jacobian of the observation function.
4.2.2. Measurement and Update Based on EKF
Combined with the basic principle of the Kalman filter, the measurement update of the aided navigation system for a single node is as follows:
  Step 1: calculate the innovation and the filter gain.
  Step 2: update the state estimate and the corresponding covariance.
4.3. Relative Measurement Update among Nodes
The standard observation model can be divided into two types: the measurement model based on the relative distance and the measurement model based on the relative position.
4.3.1. Measurement Model Based on Relative Distance
The observation of one mobile robot by another at the current time is their relative distance; the observation equation is given below, where the measurement noise has a variance matrix determined by the variance of the UWB ranging.
After linearization and state augmentation, the observation equation of the whole system is obtained, with the measurement matrix assembled from the Jacobians of the observation function with respect to the states of the two robots involved.
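The Jacobian row of the range observation with respect to the stacked state is nonzero only at the position components of the two robots involved; a sketch, with illustrative names and a 3-element (x, y, heading) sub-state per robot:

```python
import numpy as np

def range_jacobian(xi, xj, n_robots, i, j):
    """Jacobian of the UWB range z = ||p_j - p_i|| w.r.t. the stacked state.

    xi, xj: (x, y, theta) states of robots i and j; n_robots: number of
    robots in the stacked state. The heading entries are zero because the
    range does not depend on heading.
    """
    dx, dy = xj[0] - xi[0], xj[1] - xi[1]
    r = np.hypot(dx, dy)                     # predicted range
    H = np.zeros((1, 3 * n_robots))
    H[0, 3 * i:3 * i + 2] = [-dx / r, -dy / r]   # w.r.t. robot i position
    H[0, 3 * j:3 * j + 2] = [dx / r, dy / r]     # w.r.t. robot j position
    return H
```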
4.3.2. Measurement Model Based on Relative Position
Using lidar to realize relative observation between nodes can be done in two ways: the direct method and the indirect method. The direct method measures the relative position between the two nodes directly; the indirect method uses the lidar to observe the landmarks nearest to the two nodes and obtains the relative position between the nodes by a correlation calculation.
Consider the states of two mobile robots at the current time, a landmark adjacent to the first robot with coordinates expressed both in the world coordinate system and in that robot's coordinate system, and likewise a landmark adjacent to the second robot. The indirect method proceeds as shown in Figure 9: when one robot observes the other at the current time, the coordinates of the observed robot in the observing robot's coordinate system serve as the observation. The observation equation includes measurement noise with a known variance matrix.
4.3.3. Measurement Update Based on EKF
Similarly, we can finally get the measurement update process for the relative observation between nodes.
5. Decentralized Collaborative Navigation Algorithm
Under the decentralized collaborative navigation algorithm, the state and covariance information of each node are calculated separately. To avoid overly optimistic estimation as far as possible, the covariance intersection filter is introduced and the covariance of each node is divided into correlated and independent terms.
5.1. Covariance Intersection Filter
Given a state estimate vector and its corresponding covariance matrix, let the true covariance be that of the error between the state estimate and the true state value; it can be expressed as follows.
Consistency is a property of the covariance matrix of an estimate [30]: when the covariance matrix of the state estimate is not smaller than the true covariance, the estimate is said to be consistent, that is, it is not overly optimistic. Suppose two state estimates are each consistent, with corresponding covariances. If there is an unknown correlation between them, the Kalman filter may produce inconsistent results, in other words, an overly optimistic estimate. In the covariance intersection formulation, the covariance of each estimate is split into a correlated component, corresponding to the maximum possible correlation between the two estimates, and an independent component, corresponding to complete independence. A weight parameter in the interval [0, 1] is chosen to minimize the fused covariance, and any weight in this interval guarantees the consistency of the fusion result.
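The plain (non-split) covariance intersection rule can be sketched as follows, with the weight chosen by a simple scan that minimizes the trace of the fused covariance; the names and the weight discretization are illustrative, and the split variant used by the DCL algorithm additionally tracks the correlated and independent components separately.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_omega=100):
    """Fuse two consistent estimates with unknown cross-correlation.

    x1, P1 and x2, P2: the two estimates and covariances. The fused inverse
    covariance is the omega-weighted sum of the two inverse covariances;
    any omega in (0, 1) yields a consistent fusion, and omega is chosen here
    to minimize the trace of the fused covariance.
    """
    best = None
    for w in np.linspace(1e-3, 1 - 1e-3, n_omega):
        P_inv = w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2)
        P = np.linalg.inv(P_inv)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * np.linalg.inv(P1) @ x1
                     + (1 - w) * np.linalg.inv(P2) @ x2)
            best = (np.trace(P), x, P)
    return best[1], best[2]
```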
5.2. Time Update
Before describing the time update of the DCL algorithm, the state information of the system in the centralized framework must be decomposed into the per-node state vectors and the corresponding covariance matrices.
The state propagation of a node under the decentralized framework is the same as the state propagation of a single robot. The propagation of the covariance involves the one-step prediction covariance matrix of the mobile robot, its independent covariance component, the Jacobian matrices of the motion function with respect to the state vector and the control input, and the covariance matrix of the control input.
5.3. Single Node Measurement Update
The measurement update of a single node involves only that node's aided navigation system, so the correlation does not need to be estimated; that is, formulas (28) and (29) are skipped. Similar to the single-node measurement update in centralized collaborative navigation, the single-node measurement update in decentralized collaborative navigation can be expressed as follows:
  Step 1: calculate the innovation and the filter gain.
  Step 2: update the state estimate and the corresponding covariance.
5.4. Collaborative Measurement Update among Nodes
In the decentralized framework, the internode collaborative measurement update integrates the state estimate of a single node's aided navigation system with the state estimate based on information shared among nodes, and derives the corrected state information.
In the decentralized collaborative navigation framework, any node can estimate the state of the other nodes. To save communication cost and reduce the computation of a single mobile robot platform, this paper assumes that information exchange takes place only between adjacent mobile robot nodes.
Assuming that one mobile robot performs a relative observation of another at the current time and the observed robot shares its own state and covariance information, the state of the observing robot can be expressed in terms of the received state of the other robot and the relative measurement between the two nodes. The partial state estimate obtained through this information sharing combines the state vector shared by the neighboring robot with the relative measurement of the two nodes expressed in the observing robot's coordinate system.
If there is a direct relative observation between the two nodes, the relative measurement can be obtained directly from the sensor performing the observation. If the relative observation is made indirectly, with the help of surrounding landmark information, then the relative measurement must first be solved for; the concrete method combines (25) and then converts the result into the coordinate system of the observing robot.
Finally, based on the principle of the covariance intersection filter, the internode collaborative measurement update in the decentralized collaborative navigation framework can be obtained.
6. Simulation Results
6.1. Simulated Experimental Environment
In this section, the mobile robot network involved in collaborative navigation has 3 nodes. The robots operate together in a bounded area; each node is assigned an initial position in the environment and can follow a random trajectory within this area. All nodes are assumed to follow the same simulated trajectory, differing only in their initial positions. The maximum speed of a mobile robot on straight segments and its angular velocity at bends are fixed. It is assumed that, around the simulated rectangular trajectory, lidar scanning can extract 88 landmarks (environmental feature points) for the SLAM-aided navigation system (see Figure 10).
In this simulation, the mobile robots carry different types of sensors, including odometers, UWB, and lidar. Suitable sensors are selected according to the positioning accuracy requirements: Time Domain P410 UWB sensors are used to measure relative distance, and the lidar is an LMS291-series 2D lidar produced by a German company. Based on the relevant parameters of these sensors, shown in Table 1, a simulation model of mobile robots carrying these sensors is built in MATLAB.

6.2. Relative Measurement Aided Odometer Collaborative Navigation
In this experiment, all three mobile robots are equipped with an odometer for motion monitoring, together with either UWB capable of measuring relative distance or lidar capable of measuring relative position.
From Figure 11, it can be seen that a collaborative navigation system with relative information sharing has a significant advantage in positioning accuracy over the case with no information sharing. Moreover, the improvement in group navigation performance depends on the type of relative information shared. When relative position information is shared, the growth of the error is effectively bounded; when only relative distance information is shared, the position error still grows slowly, and only its growth rate is reduced.
The analysis shows that relative distance information provides only a weak constraint, so sharing it alone cannot effectively localize the mobile robots. In contrast, shared relative position information enters the navigation solution directly, so positioning accuracy is significantly improved, at some times by a large margin. This difference is even more obvious in the angle error plot (see Figure 11).
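The weaker constraint of a range measurement can be illustrated with a generic EKF measurement update (a sketch, not the paper's filter): a relative-position update observes both components of the state, whereas a range update observes only one scalar and leaves the tangential direction unobserved, so much more uncertainty survives the update:

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Standard EKF measurement update with innovation z - h."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - h), (np.eye(len(x)) - K @ H) @ P

# Prior on a 2-D relative position with unit covariance
x0, P0 = np.array([3.0, 4.0]), np.eye(2)

# (a) Relative-position update: both components observed, H = I
x_pos, P_pos = ekf_update(x0, P0, np.array([3.0, 4.0]), x0,
                          np.eye(2), 0.01 * np.eye(2))

# (b) Relative-distance update: only the range r = ||x|| is observed,
# with 1x2 Jacobian H = x^T / r, so the direction orthogonal to x
# receives no correction at all
r = np.linalg.norm(x0)
x_rng, P_rng = ekf_update(x0, P0, np.array([r]), np.array([r]),
                          (x0 / r).reshape(1, 2), np.array([[0.01]]))
```

Comparing `np.trace(P_pos)` with `np.trace(P_rng)` shows the position update shrinks the total uncertainty far more than the range update, consistent with the error growth seen in Figure 11.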
In the description of the measurement model based on relative position, this paper mentioned two observation methods: direct relative measurement and indirect relative measurement. Based on this experimental scene, in Scenario I the three mobile robots observe the relative position information directly through lidar; in Scenario II the three mobile robots extract the surrounding landmark information through lidar and compute the relative position information from it. In both scenarios, the centralized collaborative navigation algorithm is used to solve the navigation problem. The two relative position measurement methods are compared through these simulation scenarios, and the results are shown in Figure 12.
Figure 12 shows that the collaborative navigation and positioning accuracy obtained with direct relative position measurement is better than that of the indirect method. However, cost cannot be ignored when navigation performance is considered: the direct method requires the lidar measurement range to cover the activity range of the whole mobile robot group, whereas the indirect method only requires it to cover the surrounding landmarks, which is far less demanding. Considering that the positioning accuracy of the two relative position measurement methods differs little, the indirect method is clearly more suitable for practical applications.
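The indirect method can be sketched as follows (an illustrative reconstruction, with hypothetical function names): if robots i and j each observe the same landmark L in their own body frames, the landmark constraint L = p_i + R(theta_i) z_i = p_j + R(theta_j) z_j yields the relative position p_j - p_i without either robot ever seeing the other:

```python
import numpy as np

def rot(theta):
    """2-D rotation matrix from body frame to global frame."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def indirect_relative_position(z_i, theta_i, z_j, theta_j):
    """
    Relative position of robot j w.r.t. robot i (global frame), inferred
    from both robots' body-frame observations z_i, z_j of the SAME landmark:
        L = p_i + R(theta_i) z_i = p_j + R(theta_j) z_j
      =>  p_j - p_i = R(theta_i) z_i - R(theta_j) z_j
    """
    return rot(theta_i) @ z_i - rot(theta_j) @ z_j
```

For example, with ground-truth poses and a common landmark, the recovered vector matches p_j - p_i exactly in the noise-free case; with noisy observations the two measurement errors simply add.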
The decentralized collaborative navigation framework differs from the centralized one in that the correlation among the states of different nodes is computed exactly in the centralized framework, whereas this correlation is unavailable in the decentralized framework. To better reflect the impact of this correlation, the navigation errors of the two collaborative navigation algorithms in the odometer collaborative navigation system are shown in Figure 13.
To compare the two algorithms, 20 experiments are carried out in this paper, and the root mean square error (RMS) of each collaborative navigation algorithm is calculated as

$$\mathrm{RMS} = \sqrt{\frac{1}{N}\sum_{k=1}^{N}\left(x_k - \hat{x}_k\right)^2},$$

where $N$ is the total number of samples, $x_k$ is the actual value, and $\hat{x}_k$ is the estimated value. The RMS parameters for the odometer collaborative navigation are shown in Table 2.
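The RMS error over the samples of one run can be computed directly, e.g.:

```python
import numpy as np

def rms_error(actual, estimated):
    """Root mean square error between actual and estimated sample sequences."""
    actual, estimated = np.asarray(actual, float), np.asarray(estimated, float)
    return float(np.sqrt(np.mean((actual - estimated) ** 2)))
```

Averaging this quantity over the 20 repeated experiments gives the table entries.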
As can be seen from Figure 13 and Table 2, the error of the centralized collaborative navigation algorithm is smaller than that of the decentralized algorithm. This is expected, because the correlation among node states can be calculated exactly in the centralized algorithm, whereas it can only be approximated in the decentralized one. However, the improved navigation accuracy comes at the expense of high computing power and high-quality data communication. Therefore, although the centralized collaborative navigation framework performs better than the decentralized framework, it is not applicable in some practical scenarios.
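The covariance-intersection-style fusion used in decentralized frameworks can be sketched as follows (a generic illustration of the technique, not the paper's exact filter). It fuses two estimates whose cross-correlation is unknown by a convex combination of information matrices, guaranteeing a consistent result for any actual correlation:

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """
    Fuse two estimates (x1, P1) and (x2, P2) with UNKNOWN cross-correlation:
        P^{-1}   = w P1^{-1} + (1 - w) P2^{-1}
        P^{-1} x = w P1^{-1} x1 + (1 - w) P2^{-1} x2
    The weight w in [0, 1] is chosen to minimize trace(P) by grid search.
    """
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        P = np.linalg.inv(w * I1 + (1 - w) * I2)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * I1 @ x1 + (1 - w) * I2 @ x2)
            best = (np.trace(P), x, P)
    return best[1], best[2]
```

Because the combination is conservative, the fused covariance is never smaller than the truth warrants, which is exactly why the decentralized algorithm trails the centralized one in accuracy while remaining consistent.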

6.3. Odometer/Vision SLAM Collaborative Navigation
In the odometer/vision collaborative navigation model, Scenario I is designed such that all mobile robots are equipped with an odometer that monitors their motion, and one of them additionally carries a properly functioning SLAM-aided navigation system.
Firstly, the mobile robot with the SLAM-aided navigation system is studied; it runs only its own integrated navigation algorithm without sharing relative position information. Under the centralized collaborative navigation algorithm, the navigation error of the node with the SLAM-aided navigation system is shown in Figure 14.
Figure 14 verifies the correctness of the centralized collaborative navigation algorithm based on the odometer/vision collaborative navigation model. The SLAM-assisted navigation system relies on relative observations, so both the node's own position estimate and the landmark position estimates accumulate error. When combined with the SLAM data association algorithm, however, the centralized collaborative navigation algorithm brings the landmark position estimates closer to their true values while improving the positioning accuracy of the mobile robot; the data association in turn becomes more reliable, further correcting the robot's own state estimate. The algorithm therefore improves the navigation accuracy of the mobile robot markedly.
Then, the mobile robots without the SLAM-aided navigation system are studied. To fully reflect the influence of the SLAM-aided navigation information on the navigation performance of the other nodes, Scenario II is designed such that all mobile robots are equipped with an odometer that monitors their motion, and two of them additionally carry properly functioning SLAM-aided navigation systems. The navigation error of the other nodes, which have no SLAM-aided navigation system, is shown in Figure 15.
As shown in Figure 15, the mobile robot with the SLAM-aided navigation system performs loop-closure detection at about 320 seconds, associating its observations with the local map created at the initial location and thus eliminating most of the accumulated error. This uniquely beneficial behavior of the SLAM-aided navigation system is transmitted to the other nodes in the group through information sharing during collaborative navigation, so that they too eliminate most of their accumulated error around the same time. This is an important advantage of the collaborative navigation system.
To evaluate the effect of the NN algorithm, the JCBB algorithm, and the optimized data association algorithm on the navigation performance of nodes without a SLAM-aided navigation system, an experimental scene is designed in which all mobile robots are equipped with an odometer for motion monitoring, one of them carries a properly functioning SLAM-aided navigation system, and the CL algorithm is run. The navigation error of the nodes without the SLAM-aided navigation system is shown in Figure 16.
The performance of the centralized collaborative navigation algorithm under the three SLAM data association algorithms is shown in Table 3.
From Figure 16 and Table 3, it can be seen that the navigation performance of nodes without a SLAM-aided navigation system is affected by the SLAM data association algorithm used by the node carrying the SLAM-aided navigation system. With the NN algorithm, the matching accuracy of feature information is low, so navigation accuracy is poor. With the JCBB algorithm, the data association success rate is the highest, but the runtime is the longest. With the optimized data association algorithm, navigation accuracy is slightly reduced, but the runtime is shorter and can meet real-time requirements.
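The individual-compatibility nearest-neighbour (NN) step can be sketched as below (a simplified illustration with a shared innovation covariance `S`; JCBB additionally tests the joint compatibility of the whole set of pairings, which is what makes it slower but more reliable):

```python
import numpy as np

CHI2_GATE_2D = 5.991  # 95% acceptance gate for a 2-DOF chi-square test

def nn_associate(observations, predictions, S):
    """
    Nearest-neighbour data association: each observation is matched to the
    predicted landmark with the smallest Mahalanobis distance, provided the
    distance passes the chi-square gate. Returns (obs_index, landmark_index
    or None) pairs; None means the observation is left unassociated.
    """
    S_inv = np.linalg.inv(S)
    pairs = []
    for i, z in enumerate(observations):
        d2 = [float((z - p) @ S_inv @ (z - p)) for p in predictions]
        j = int(np.argmin(d2))
        pairs.append((i, j if d2[j] < CHI2_GATE_2D else None))
    return pairs
```

Because each observation is gated independently, NN can accept mutually inconsistent matches in cluttered regions; this is the source of the poor matching accuracy noted above.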

In this subsection, to compare the performance of the odometer/vision collaborative navigation system under the centralized and decentralized collaborative navigation algorithms, the CL and DCL algorithms are run separately in experimental Scenario I; the navigation errors of the two algorithms are compared in Figure 17. The two algorithms are then run in experimental Scenario II, and their navigation errors are compared in Figure 18.
After 20 experiments, the RMS parameters of collaborative navigation with single node SLAM information are shown in Table 4.
The RMS parameters of the coordinated navigation with the fused multinode SLAM information are shown in Table 5.
As can be seen from Figures 17 and 18 together with Tables 4 and 5, in the odometer/vision collaborative navigation system the error of the centralized collaborative navigation algorithm is smaller than that of the decentralized algorithm; once the landmark information collected by the single node or by multiple nodes is fused, the gap between the two algorithms becomes small. In other words, the decentralized collaborative navigation algorithm based on the odometer/vision collaborative navigation model estimates the correlation of the internode information well.


Considering the centralized collaborative navigation algorithm's high demands on computing power and communication quality, the application scenarios of the two algorithms can be summarized from the above collaborative navigation experiments: the centralized algorithm is suitable when there are few nodes and no additional aided navigation systems, whereas the decentralized algorithm is suitable when there are many nodes, a large amount of shared information, and some nodes equipped with additional aided navigation systems, especially SLAM-aided navigation systems.
7. Conclusion
In order to improve the performance of the collaborative navigation system, multirobot collaborative navigation algorithms based on odometer/vision multisource information fusion are studied. On the basis of the multisource information fusion collaborative navigation system model, the centralized and decentralized odometer/vision collaborative navigation frameworks and the vision-based SLAM algorithm are given, and the centralized and decentralized odometer/vision collaborative navigation algorithms are derived, respectively. The effectiveness of the proposed algorithms is verified by simulation experiments, which gives them theoretical and practical value for high-performance collaborative navigation applications.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
 K. N. Olivier, D. E. Griffith, G. Eagle et al., “Randomized trial of liposomal amikacin for inhalation in nontuberculous mycobacterial lung disease,” American Journal of Respiratory and Critical Care Medicine, vol. 195, no. 6, pp. 814–823, 2017.
 M. Schwarz, M. Beul, D. Droeschel et al., “DRC team NimbRo Rescue: perception and control for centaur-like mobile manipulation robot Momaro,” Springer Tracts in Advanced Robotics, Springer, Berlin, Germany, 2018.
 M. Long, H. Su, and B. Liu, “Group controllability of two-timescale discrete-time multiagent systems,” Journal of the Franklin Institute, vol. 357, no. 6, pp. 3524–3540, 2020.
 T. Fukuda, S. Nakagawa, Y. Kawauchi, and M. Buss, “Structure decision method for self organising robots based on cell structures - CEBOT,” in Proceedings of the 1989 International Conference on Robotics and Automation, Scottsdale, AZ, USA, May 1989.
 H. Asama, A. Matsumoto, and Y. Ishida, “Design of an autonomous and distributed robot system: ACTRESS,” in Proceedings of the IEEE/RSJ International Workshop on Intelligent Robots and Systems (IROS ’89), IEEE, Tsukuba, Japan, September 1989.
 J. Zhou, Y. Lv, G. Wen, X. Wu, and M. Cai, “Three-dimensional cooperative guidance law design for simultaneous attack with multiple missiles against a maneuvering target,” in Proceedings of the 2018 IEEE CSAA Guidance, Navigation and Control Conference (CGNCC), IEEE, Xiamen, China, August 2018.
 H. Su, J. Zhang, and Z. Zeng, “Formation-containment control of multirobot systems under a stochastic sampling mechanism,” Science China Technological Sciences, vol. 63, no. 6, pp. 1025–1034, 2020.
 H. Park and S. Hutchinson, “A distributed robust convergence algorithm for multi-robot systems in the presence of faulty robots,” in Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2980–2985, IEEE, Hamburg, Germany, September–October 2015.
 K. Petersen and R. Nagpal, “Complex design by simple robots: a collective embodied intelligence approach to construction,” Architectural Design, vol. 87, no. 4, pp. 44–49, 2017.
 L. Chaimowicz, T. Sugar, V. Kumar, and M. F. M. Campos, “An architecture for tightly coupled multi-robot cooperation,” in Proceedings of the 2001 IEEE International Conference on Robotics and Automation (ICRA), vol. 3, pp. 2992–2997, IEEE, Seoul, Korea, May 2001.
 H.-X. Hu, G. Chen, and G. Wen, “Event-triggered control on quasi-average consensus in the cooperation-competition network,” in Proceedings of IECON 2018—44th Annual Conference of the IEEE Industrial Electronics Society, IEEE, Washington, DC, USA, October 2018.
 A. Amanatiadis, K. Charalampous, I. Kostavelis et al., “The AVERT project: autonomous vehicle emergency recovery tool,” in Proceedings of the 2013 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), pp. 1–5, IEEE, Linkoping, Sweden, October 2013.
 R. Kurazume, S. Hirose, T. Iwasaki, S. Nagata, and N. Sashida, “Study on cooperative positioning system,” in Proceedings of the IEEE International Conference on Robotics and Automation, Minneapolis, MN, USA, August 1996.
 Z. Fu, Y. Zhao, and G. Wen, “Distributed continuous-time optimization in multiagent networks with undirected topology,” in Proceedings of the 2019 IEEE 15th International Conference on Control and Automation (ICCA), IEEE, Edinburgh, UK, November 2019.
 Y. Zhao, Y. Liu, and G. Wen, “Finite-time average estimation for multiple double integrators with unknown bounded inputs,” in Proceedings of the 2018 33rd Youth Academic Annual Conference of Chinese Association of Automation (YAC), IEEE, Nanjing, China, May 2018.
 S. Mao, “Mobile robot localization in indoor environment,” Ph.D. dissertation, Zhejiang University, Hangzhou, China, 2016.
 J. Yang, “Analysis approach to odometric non-systematic error uncertainty for mobile robots,” Chinese Journal of Mechanical Engineering, vol. 44, no. 8, pp. 7–12, 2008.
 J. Kang, F. Zhang, and X. Qu, “Angle measuring error analysis of coordinate measuring system of laser radar,” vol. 40, no. 6, 2016.
 J. Zhang, P. Orlik, Z. Sahinoglu, A. Molisch, and P. Kinney, “UWB systems for wireless sensor networks,” Proceedings of the IEEE, vol. 97, no. 2, pp. 313–331.
 D. Kaushal and T. Shanmuganantham, “Design of a compact and novel microstrip patch antenna for multiband satellite applications,” Materials Today: Proceedings, vol. 5, no. 10, pp. 21175–21182, 2018.
 J. Xiucai, “Data association problem for simultaneous localization and mapping of mobile robots,” Ph.D. dissertation, National University of Defense Technology, Changsha, China, 2008.
 Z. Yuan, “Research of mobile robot’s SLAM based on binocular vision,” Master’s thesis, Tianjin University of Technology, Tianjin, China, 2016.
 F. Bellavia, M. Fanfani, F. Pazzaglia, and C. Colombo, “Robust selective stereo SLAM without loop closure and bundle adjustment,” in Proceedings of the International Conference on Image Analysis and Processing, pp. 462–471, Springer, Naples, Italy, 2013.
 H. Fourati, Multisensor Data Fusion: From Algorithms and Architectural Design to Applications, CRC Press, Boca Raton, FL, USA, 2015.
 S. Jia, X. Yin, and X. Li, “Mobile robot parallel PF-SLAM based on OpenMP,” in Proceedings of the 2012 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 508–513, IEEE, Guangzhou, China, December 2012.
 W. Zhou, E. Shiju, Z. Cao, and Y. Dong, “Review of SLAM data association study,” in Proceedings of the 2016 International Conference on Sensor Network and Computer Engineering, Atlantis Press, Shanghai, China, 2016.
 R. Singer and R. Sea, “A new filter for optimal tracking in dense multitarget environments,” in Proceedings of the Annual Allerton Conference on Circuit and System Theory, pp. 201–211, Monticello, IL, USA, 1972.
 J. Neira and J. D. Tardós, “Data association in stochastic mapping using the joint compatibility test,” IEEE Transactions on Robotics and Automation, vol. 17, no. 6, pp. 890–897, 2001.
 L. Yanju, X. Yufeng, G. Song, H. Xi, and G. Zhengping, “Research on data association in SLAM based laser sensor,” Microcomputer & Its Application, vol. 36, no. 2, pp. 78–82, 2017.
 O. Hlinka, O. Slučiak, F. Hlawatsch, and M. Rupp, “Distributed data fusion using iterative covariance intersection,” in Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1861–1865, IEEE, Florence, Italy, May 2014.
Copyright
Copyright © 2020 Guohua Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.