Abstract

This paper develops a first application of information-fusion SLAM (IF-SLAM) methods for mobile robots performing simultaneous localization and mapping (SLAM) adapted to search and rescue (SAR) environments. Several fusion approaches are proposed: parallel filtering of measurements, fusing of exploration trajectories, and combination of sensors' measurements with mobile robots' trajectories. Novel integration particle filter (IPF) and optimal improved EKF (IEKF) algorithms are derived for the information-fusion systems to perform the SLAM task in SAR scenarios. The information-fusion architecture consists of multirobots and multisensors (MAM); the robots carry on-board laser range finder (LRF) sensors, localization sonars, gyro odometry, a Kinect sensor, an RGB-D camera, and other proprioceptive sensors. The IF-SLAM is compared with conventional methods, showing that the fusion trajectory is more consistent with both the estimated trajectories and the real observed trajectories. Simulations and experiments of the SLAM process are conducted in a cluttered indoor environment and an outdoor collapsed unstructured scenario, and the experimental results validate the effectiveness of the proposed information-fusion methods in improving SLAM performance in SAR scenarios.

1. Introduction

Mobile robots performing the simultaneous localization and mapping (SLAM) task in search and rescue (SAR) postdisaster scenarios build a global exploration map while localizing themselves within this map. Decision modeling for information fusion was proposed early by Torra and Narukawa [1], and a parallel tracking and mapping (PTAM) strategy [2] and keyframe subblock matching [3] were used for robotic mapping and localization. Murphy et al. [4] argued that SLAM ability is necessary for mobile robots conducting SAR tasks. Motivated by this research, we developed an information-fusion SLAM (IF-SLAM) of multirobots and multisensors (MAM) that autonomously explores an SAR area without recourse to the global positioning system (GPS). Multiple sensors are mounted on board the robots. The information from several sources, or from different times of the same sensor, is integrated into a single datum to implement the SLAM assignment for each robot. An image-matching solution incorporates extraction of overlap regions and of key frames within those regions to rebuild the scene through subblock matching, and the image fusion of matched features transforms a 3D object of interest in the real world into a 2D image in the global map. The multisensor integration block provides continuous online information about the mobile robots and their local environment.

The integration particle filter (IPF) and optimal improved extended Kalman filter (IEKF) algorithms combine traditional consistent Rao-Blackwellized (R-B) particle filters and are embedded in the MAM IF-SLAM architecture. This architecture consists of multiple robots and multiple sensors, including laser range finder (LRF) sensors, localization sonars, gyro odometry, a Kinect sensor, an RGB-D camera, and other proprioceptive sensors mounted on board the robots.

Mobile robots have great potential to assist in underground operations, searching ahead of rescue teams and reporting conditions that may be hazardous to them. Robots can become an invaluable tool. The SLAM challenge arises when a mobile robot attempts to build a map of a cluttered SAR environment while simultaneously localizing itself relative to this map. Robotic SLAM in SAR environments confronts difficulties along random exploration trajectories, including the relationship between map accuracy and the robot's steady-state motion behavior, and the need for a simultaneous estimate of both robot and landmark locations. Underground mine rescue confronts the same issues in the use of belays and tethers for operating down voids or on slopes.

Generally, mobile robots do not have the luxury of GPS localization while exploring underground [5]. SAR SLAM with multiple robots carrying multiple on-board sensors, combining measurements and trajectories, improves SLAM performance in object identification, area coverage, and loop-closure, all of which are necessary for robotic search and rescue tasks.

Multiple robots cooperatively acquiring information about their surroundings has been widely studied [1, 6–9], and an integration strategy for different kinds of measurements has been used in multimode 3D perception. Pfingsthorn et al. [7] updated local submaps through their graph-based structures, allowing them to be efficiently communicated among mobile robots' sensors. Using a group of mobile robots equipped with sensors for source concentration measurement, a robot team cooperatively estimates the gradient of the exploration field, moves toward the objects of interest, and keeps a predefined formation while moving [8]. Atanasov et al. [9] developed an active information acquisition method for multiple sensors that captures the common characteristics of these scenarios and controls mobile sensing systems to improve the accuracy and efficiency of autonomous information collection.

We developed robotic SLAM that fuses data provided by multiple robots with on-board multisensors, vision fusion of images, and integration of different kinds of measurements. The on-board proprioceptive sensors perceive the surroundings to supply various information and measurements, and the mobile robot then fuses the acquired data and the exploration trajectories for SLAM processing. In a collapsed and cluttered SAR disaster site, landmarks are usually difficult to assign and their poses hard to determine. In the proposed methods, the mobile robots explore and cover the collapsed SAR area while taking relative observations of landmarks, and the sensor data fusion approaches keep the few available landmark locations maximally consistent with the mobile robot's pose.

According to the theory of Torra and Narukawa and of Milisavljevic [1, 10], information fusion is a process that embeds particular mathematical algorithms in an integration architecture and requires special techniques for combining information. The sensor and data fusion theory in [10] illustrates a strategy for fusing moving-target trajectories with sensor measurements. The works [2, 3] proposed monocular single-camera SLAM and keyframe localization approaches; however, these methods are not suitable for SAR SLAM in urgent postdisaster areas.

Inspired by this body of research, we designed a novel multirobot and multisensor information-fusion SLAM (IF-SLAM) architecture, in which the keyframes and environmental data obtained from the MAM architecture are processed by fusion strategies, and a single datum is output for performing SLAM. Multiple robots explore the SAR postdisaster area and perform the SLAM task, maintaining a graph with sensor observations stored in vertices and pose differences, including uncertainty information, stored in edges. Integration particle filter (IPF) and optimal improved extended Kalman filter (IEKF) algorithms are embedded in the MAM platform to improve motion trajectory accuracy, which is well suited to SAR postdisaster environments. The proposed information-fusion SAR SLAM methods employ hybrid map representations that combine the strengths of topological maps and occupancy grids; these representations facilitate multirobot mapping.

The remainder of this paper is organized as follows: Section 2 describes the information integration architecture. Section 3 elaborates and derives the information-fusion algorithms, and Section 4 presents SLAM simulations with virtual robots using the information-fusion method. In Section 5, experiments are performed employing a Pioneer LX robot (Adept MobileRobots Pioneer 3, Mobile Robots Inc., http://www.mobilerobots.com) and a crawler mobile robot in indoor cluttered and outdoor collapsed scenarios, respectively. Section 6 discusses the results, and Section 7 summarizes our conclusions and future work.

2. Information Integration Architecture

2.1. Multiple Robots

Mobile robots equipped with multiple sensors and an initial preset metric map have been developed. According to Saeedi et al. [11], multiple robots conduct mapping by fusing two occupancy-grid submaps; Deshpande et al. [6] proposed a robotic platform equipped with a camera and various sensors for multimode 3D perception. In the designed multirobot architecture, SLAM is achieved by mapping out an unknown environment and generating a consistent map. In general, single-robot SLAM is achieved through extended Kalman filter (EKF) data fusion; this approach is then extended to multirobot SLAM by using the MobileRobot Operating System (MOS). The multiple robots conduct exploration missions and provide a perception of their surroundings for performing SLAM in the SAR environment.

2.2. Multisensors On-Board Robots

A multisensor robotic platform is intended for deployment in environmentally hazardous situations. Thrun et al. [12] equipped several LRFs to register consecutive scans; 2D/3D scan information was acquired from forward-pointing LRFs. A multisensor system was developed for acquiring accurate environmental maps of SAR environments. The system is based on an autonomous robot (i.e., a Pioneer LX) installed with LRFs and sensors; other mobile robots (i.e., Amigo robots) are equipped with various sensors. As displayed in Figure 1, the designed multiple sensors (LRFs, gyros, sonars, and cameras) are mounted on board the robots for information integration.

2.3. Integration Strategies

Fusion algorithms and SLAM programs are embedded in the multirobot and multisensor architecture. Zhou and Roumeliotis [13] merged local submaps generated independently by different robots, so that a robot team cooperatively built a joint environmental map; Henry et al. [14] used a single RGB-D camera to build a dense 3D map, since a two-dimensional map alone lacks depth information about surrounding objects.

Locally consistent submaps do not guarantee global consistency. An optimal algorithm for fusing the local submaps created by the multiple robots and sensors is therefore needed. The fusion of visual and shape information obtained from cameras and laser scans is used to construct dense 3D object maps. Rich visual features, verified by random sample consensus (RANSAC), provide fixed data that are fused into the iterative closest point (ICP) optimization. To build consistent maps of the SAR postdisaster environment, the ICP algorithm is used to estimate global correspondences and align robot exploration trajectories. Through fusion algorithms that integrate local occupancy-grid submaps, a globally consistent map is formed. Integration operators are developed to combine measurements within an area and output a processed value $y$. The integration operator can be represented in functional form as

$$y = f(x_1, x_2, \ldots, x_n), \qquad (1)$$

where $x_1, \ldots, x_n$ indicate the measurement inputs and $f$ corresponds to the mathematical strategy used for information fusion.
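As a minimal sketch of one such operator (the inverse-variance weighting rule and the numeric readings below are illustrative assumptions, not the paper's calibrated fusion strategy), $f$ may be realized as a precision-weighted average of redundant range measurements:

```python
import numpy as np

def integration_operator(measurements, variances):
    """Combine n measurements of the same quantity into one processed
    value y = f(x_1, ..., x_n) by inverse-variance weighting."""
    x = np.asarray(measurements, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # weight each input by its precision
    y = np.sum(w * x) / np.sum(w)                  # fused value
    var_y = 1.0 / np.sum(w)                        # variance of the fused value
    return y, var_y

# Example: three hypothetical LRF range readings of the same object
y, var_y = integration_operator([2.10, 2.04, 2.12], [0.01, 0.04, 0.02])
print(f"fused range: {y:.3f} m, variance: {var_y:.4f}")
```

The fused variance is smaller than that of any single input, which is the property the integration block relies on.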

3. Information-Fusion Algorithms Adapting to SAR SLAM

3.1. Modeling Predefined

A map of the environment can be parametrized as a set of spatially located landmarks, occupancy grids, surface terrain, and sensor measurements. The IF-SLAM builds a joint state consisting of the multiple robots' poses and each landmark location, employing a large state vector, whose size is on the order of the number of landmarks maintained in the map, to track the landmarks' scale. In the SLAM process, the true locations of a mobile robot are never known or measured directly. Observations are employed to estimate the relationship between the true robot locations and the landmark locations. The vector sets are defined as follows.

The history of the mobile robot's locations:

$$X_{0:k} = \{x_0, x_1, \ldots, x_k\}. \qquad (2)$$

The history of control inputs:

$$U_{0:k} = \{u_1, u_2, \ldots, u_k\}. \qquad (3)$$

The set of all landmarks:

$$m = \{m_1, m_2, \ldots, m_n\}. \qquad (4)$$

The set of all landmark observations:

$$Z_{0:k} = \{z_1, z_2, \ldots, z_k\}, \qquad (5)$$

where $x_k$ is the location and orientation state vector; $u_k$ is the control vector, which drives the mobile robot state from $x_{k-1}$ at time $k-1$ to $x_k$ at time $k$; $m_i$ is the vector of the $i$th landmark location, which is time invariant; $z_{k,i}$ is the observation of the $i$th landmark, which corresponds to the mobile robot location; and $z_k$ denotes the observation of multiple landmarks.

In integration particle filters, the degree of convergence between the EKF-based mobile robot location estimates and the probabilistic localization and mapping is expressed through the Rao-Blackwellized factorization

$$p(X_{0:k}, m \mid Z_{0:k}, U_{0:k}) = p(m \mid X_{0:k}, Z_{0:k})\, p(X_{0:k} \mid Z_{0:k}, U_{0:k}), \qquad (6)$$

where $p(X_{0:k} \mid Z_{0:k}, U_{0:k})$ is the probability distribution over the robot locations and $p(m \mid X_{0:k}, Z_{0:k})$ is the probability distribution over the map.

The robot searches for an object in the current scan by employing a greedy iterative algorithm with heuristic constraints, and this object is matched with the nearest unmatched object of the last scan. Assuming the current object list is $O = \{o_1, o_2, \ldots, o_M\}$, the time elapsed since each available landmark within the SAR exploration area was last measured is represented in (7), and the measured distances between the mobile robot and the objects of interest are expressed in (8):

$$\Delta t_i = t - t_i, \qquad (7)$$

$$d_i = \min_{p \in \partial o_i} \lVert x_k - p \rVert, \qquad (8)$$

where $\Delta t_i$ is the time difference between the current time $t$ and the time $t_i$ at which object $o_i$ was last measured, and $d_i$ is the minimum distance from the robot's current location $x_k$ to the boundary $\partial o_i$ of object $o_i$.

Equations (2)–(8) establish the predefined models of mobile robot SLAM adapted to the SAR environment.
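A minimal sketch of the greedy association step behind (7) and (8) (the centroid representation, the gate threshold, and the example coordinates are hypothetical choices for illustration, not the paper's implementation):

```python
import numpy as np

def greedy_match(current_objects, previous_objects, gate=1.0):
    """Greedily match each object of the current scan to the nearest
    still-unmatched object of the previous scan, subject to a heuristic
    distance gate. Objects are represented by 2D centroids.
    Returns a list of (current_idx, previous_idx) pairs."""
    cur = np.asarray(current_objects, dtype=float)
    prev = np.asarray(previous_objects, dtype=float)
    unmatched = set(range(len(prev)))
    pairs = []
    for i, c in enumerate(cur):
        if not unmatched:
            break
        # nearest unmatched object of the last scan
        j = min(unmatched, key=lambda k: np.linalg.norm(c - prev[k]))
        if np.linalg.norm(c - prev[j]) <= gate:   # heuristic constraint
            pairs.append((i, j))
            unmatched.remove(j)
    return pairs

print(greedy_match([[1.0, 0.1], [3.0, 2.0]], [[0.9, 0.0], [3.2, 2.1]]))
```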

3.2. IF-SLAM of Integration Particle Filter Algorithms

The on-board proprioceptive sensors incorporate a laser measurement system (LMS), an omnidirectional stereo camera, an inertial measurement unit (IMU), and a Kinect, which enable a mobile robot to build a map of the environment and to localize itself within this map at the same time. Olofsson et al. [15] discussed data and map fusion across two workspaces; the state space is estimated based on EKF and Rao-Blackwellized filtering.

In the robot IF-SLAM approach, a sequence of motion trajectories is used to construct submaps by fusing the on-board sensor data. The sensor information algorithms require a minimum of robot modeling; the first is based on the improved extended Kalman filter (IEKF), and the second on the Rao-Blackwellized (R-B) particle filter; the integration particle filter (IPF) algorithm combines the IEKF and R-B particle filters. The mobile robot motion in probabilistic form is

$$x_k = f(x_{k-1}, u_k) + w_k, \qquad (9)$$

where $x_k$ is the robot's pose state vector from (2), $u_k$ is the control vector, and $w_k$ represents Gaussian white noise. The observation model accompanying the motion model, from which the prior and posterior distributions are built, is

$$z_k = h(x_k, m) + v_k, \qquad (10)$$

and the marginalized joint distribution is

$$p(x_k^l, X_{0:k}^n \mid Z_{0:k}) = p(x_k^l \mid X_{0:k}^n, Z_{0:k})\, p(X_{0:k}^n \mid Z_{0:k}), \qquad (11)$$

where $x_k = [x_k^l;\, x_k^n]$, $x_k^l$ is the linear part, $x_k^n$ is the nonlinear part sampled from the state vector set, $\{X_{0:k}^{n,(i)}, \omega_k^{(i)}\}_{i=1}^N$ is the set of weighted particles, and $\omega_k^{(i)}$ are the evaluated particle weights.
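To make the particle part of the IPF concrete, the following sketch runs one predict/update/resample cycle for a planar pose observed through a single range landmark; the motion model, noise levels, and landmark position are toy assumptions, and the linear IEKF substate is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def motion_model(x, u, dt=0.1):
    """Eq. (9): propagate pose (px, py, theta) with noisy controls (v, omega)."""
    v, w = u + rng.normal(0.0, [0.05, 0.02])
    return x + dt * np.array([v * np.cos(x[2]), v * np.sin(x[2]), w])

def ipf_step(particles, weights, u, z, landmark, sigma_z=0.1):
    """One cycle of the nonlinear (sampled) part: predict each particle,
    weight it by the range-observation likelihood, resample if degenerate."""
    particles = np.array([motion_model(p, u) for p in particles])
    pred = np.linalg.norm(particles[:, :2] - landmark, axis=1)
    weights = weights * np.exp(-0.5 * ((z - pred) / sigma_z) ** 2)
    weights = weights / weights.sum()
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):   # low effective sample size
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights

particles = rng.normal(0.0, 0.1, size=(100, 3))
weights = np.full(100, 1.0 / 100)
particles, weights = ipf_step(particles, weights, u=np.array([1.0, 0.1]),
                              z=4.9, landmark=np.array([5.0, 0.0]))
print(np.average(particles, axis=0, weights=weights))   # weighted pose estimate
```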

3.3. Sensors’ Measurements Data Fusion

The measurement data fusion algorithm matches relative frame-to-frame poses of the mobile robot independently. The authors in [16–18] fuse a stereo camera and inertial measurement units (IMUs) for robot state estimation in GPS-denied large-scale unstructured environments, which enables robot exploration using combined algorithms of stereo model-based outlier rejection and spatial-temporal filtering for 3D mapping and localization. In [19], the robot captures 3D scans of points of interest and combines these key points into a topological map.

The multisensor system fuses a stereo-camera sensor, inertial measurement units (IMU), and LRF odometry with an EKF algorithm to ensure robust, low-latency SLAM performance. An initial metric map of the SAR environment is preinstalled, and this map is then updated and reconstructed in real time by fusing the information obtained from the on-board sensors. A mobile robot explores an unstructured postdisaster area while incrementally building a globally consistent environmental map and locating itself in this map simultaneously. The points of interest are extracted from a group of LRFs. By fusing these interest points into a topological map, we obtain a valuable tool for exploration and localization in cluttered SAR environments. Figure 2 compares each single-sensor trajectory of three LRF sensors with the trajectory obtained by fusing their measurement data.
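A minimal sketch of how several such sensors can correct the same state within one EKF cycle, assuming for brevity that each sensor directly observes the 2D position (the covariances and readings below are illustrative, not measured values):

```python
import numpy as np

def ekf_sequential_update(x, P, observations):
    """Fuse several asynchronous sensor readings of the same 2D position
    state by applying the EKF measurement update once per sensor.
    observations: list of (z, R) pairs, one per sensor."""
    H = np.eye(2)                              # each sensor observes position directly
    for z, R in observations:
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ (z - H @ x)                # state correction
        P = (np.eye(2) - K @ H) @ P            # covariance correction
    return x, P

x0, P0 = np.array([0.0, 0.0]), np.eye(2) * 1.0
obs = [(np.array([0.9, 0.1]), np.eye(2) * 0.05),   # e.g., an LRF-derived fix
       (np.array([1.1, 0.0]), np.eye(2) * 0.10),   # e.g., a stereo-derived fix
       (np.array([1.0, 0.2]), np.eye(2) * 0.20)]   # e.g., an IMU-derived fix
x, P = ekf_sequential_update(x0, P0, obs)
print(x, np.diag(P))
```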

The sensor measurement fusion makes the nodes or particles maximally consistent with the poses of the mobile robot, and large errors are minimized or even eliminated. Graph-based SLAM builds a simplified estimate by extracting the sensor measurements. In a graph-based map, the nodes correspond to the poses of the robot at various points, and the edges represent the constraints between connected poses. Such a graph-based map can be computed by searching for the spatial configuration of the nodes that is maximally consistent with the measurements modeled by the edges. A mobile robot performs perceptive autonomous exploration to reach goal positions in collapsed disaster scenarios by fusing planar laser scans, stereo vision, and the proprioceptive sensing system. As shown in Table 1, a measurement set corresponding to the experiment of Figure 5 is obtained from the mobile robot running SLAM in a cluttered environment, fusing an LRF, four pairs of sonars, and gyro odometry.
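The node/edge structure and the search for a maximally consistent configuration can be sketched in one dimension as follows (the Gauss-Seidel relaxation and all numeric values are illustrative; a full system would solve a nonlinear least-squares problem over SE(2) poses):

```python
class PoseGraph:
    """Minimal 1D pose graph: nodes hold poses, edges hold relative
    constraints with information weights (sketch only)."""
    def __init__(self):
        self.poses = []            # node k -> scalar pose
        self.edges = []            # (i, j, measured pose[j]-pose[i], weight)

    def add_node(self, pose):
        self.poses.append(float(pose))
        return len(self.poses) - 1

    def add_edge(self, i, j, delta, weight=1.0):
        self.edges.append((i, j, float(delta), float(weight)))

    def relax(self, iters=50):
        """Move each non-anchor node toward the weighted average of the
        values suggested by its incident edge constraints."""
        for _ in range(iters):
            for k in range(1, len(self.poses)):        # node 0 is the anchor
                num = den = 0.0
                for i, j, d, w in self.edges:
                    if j == k:
                        num += w * (self.poses[i] + d); den += w
                    if i == k:
                        num += w * (self.poses[j] - d); den += w
                if den:
                    self.poses[k] = num / den

g = PoseGraph()
for p in [0.0, 1.2, 2.1, 3.3]:                 # odometry-initialized nodes
    g.add_node(p)
g.add_edge(0, 1, 1.0); g.add_edge(1, 2, 1.0); g.add_edge(2, 3, 1.0)
g.add_edge(0, 3, 3.0, weight=2.0)              # loop-closure constraint
g.relax()
print([round(p, 2) for p in g.poses])
```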

3.4. Trajectories Fusion

According to [20], R-B particle filters fuse the robot's trajectory and local submap into a distribution over potential trajectories, tracked by a particle filter, and a distribution over potential maps for each particle. Researchers in [21, 22] detected an object by incorporating LRF and vision system measurements into a Kalman filter and built a submap using a combined constraint data association (CCDA) approach that constructs and searches a correspondence graph. The work in [23] considered centralized and decentralized fusion structures; however, the former increases computation and the latter partially reduces system accuracy.

While initially motivated by problems in data association and loop-closure, these methods have resulted in qualitatively different formulations of the SLAM problem, focusing on trajectory estimation rather than landmark estimation. We combined the two methods, using a centralized trajectory together with the decentralized trajectories of each robot. A mobile robot explores the unstructured disaster area while estimating the robot trajectory and mapping the environment, offering higher accuracy and improved computation speed.

Trajectory fusion is a method of integrating all trajectories obtained from different sensors into one estimated trajectory, which improves the accuracy and efficiency of reaching the objects of interest. In Figure 3, (a) displays the fusion of the trajectories from three different LRF sensors, in which the red solid line represents the true trajectory and the blue dash-dotted line indicates the trajectory fused by the proposed algorithms; (b), (c), and (d) show the trajectory error analyses of sensors 1, 2, and 3 relative to the mean value, in which each red line represents the mean value. As shown in Table 2, the fused trajectory matches the true trajectory well, and each single sensor's mean error is analyzed.
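A compact sketch of the trajectory fusion idea (the ground-truth curve, the per-sensor noise levels, and the inverse-variance weights are illustrative assumptions; in the paper the fused estimate comes from the IPF/IEKF machinery of Section 3.2):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ground-truth trajectory and three noisy sensor trajectories
t = np.linspace(0.0, 10.0, 200)
truth = np.stack([t, np.sin(t)], axis=1)
sigmas = [0.05, 0.10, 0.08]                    # assumed per-sensor noise levels
tracks = [truth + rng.normal(0.0, s, truth.shape) for s in sigmas]

# Trajectory fusion: inverse-variance weighted average at each time step
w = np.array([1.0 / s**2 for s in sigmas])
fused = sum(wi * tr for wi, tr in zip(w, tracks)) / w.sum()

def mean_error(track):
    return np.mean(np.linalg.norm(track - truth, axis=1))

for k, tr in enumerate(tracks, 1):
    print(f"sensor {k} mean error: {mean_error(tr):.4f}")
print(f"fused trajectory mean error: {mean_error(fused):.4f}")
```

The printed fused mean error should fall below every single-sensor error, mirroring the qualitative behavior reported in Table 2.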

3.5. Error Analysis of Two Fusion Methods

In the proposed optimal IEKF and IPF algorithms, each particle updates the observed landmarks as a simple mapping operation with a known mobile robot pose. Using the optimal IEKF as the core of the pose filter results in a modular system to which new sensor measurements can easily be added. A set of criteria is used to assess the quality of 2D topological map segmentation. Based on spectral clustering of an incremental generalized Voronoi decomposition of discretized metric maps, Liu et al. [24] extracted sparse spatial information from the maps and built an environmental model aimed at simplifying the exploration task for mobile robots; in [25] the ICP-based and the stereo-based T and R systems conducted offline processing on their respective data.

To analyze the localization accuracy, we recorded the sensor data (laser point clouds and stereo images) of all runs during the experiment, compared the proposed IPF algorithm with a state-of-the-art stereo-vision-based technique, and show that the novel approaches greatly increase robustness to trajectory deviations and are less dependent on environmental conditions. In the error analysis, the error state vector is defined as

$$\delta x = \begin{bmatrix} \delta p^T & \delta v^T & \delta \phi^T & b_a^T & b_g^T \end{bmatrix}^T, \qquad (12)$$

where $\delta p$ is the error in location; $\delta v$ is the error in velocity; $\delta \phi$ is the error in attitude; $b_a$ is the sensors' bias in acceleration; and $b_g$ is the pose bias in attitude.

The mobile robot's movement error model is expressed as

$$\delta x_k = F\, \delta x_{k-1} + w_{k-1}, \qquad (13)$$

and the error measurement model is expressed as

$$\delta z_k = H\, \delta x_k + v_k, \qquad (14)$$

where $w_k$ and $v_k$ represent white Gaussian noise, respectively; $\delta z_k$ is the error measurement; $\delta x_k$ is the predicted error state vector; $F$ is the error-state system matrix; and $H$ is the observation matrix. Algorithm 1 lists the IPF algorithm procedure.

Algorithm 1: Integration particle filter (IPF) procedure.
(1) Q = noise covariance matrix, eqs. (12), (16).
(2) R = observation matrix, eqs. (1), (9), (11), (15), (17);
    compute the single-sensor estimates Est1, Est2, and Est3;
    compute the fused estimates Est123 and Est321.
(3) Error_variance = Error_sum, eqs. (13), (14), (18).
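A small runnable sketch of the error-state filter defined by (12)–(14) (the two-state example, matrices, and noise levels are toy assumptions; the paper's full error state stacks position, velocity, attitude, and bias terms):

```python
import numpy as np

def error_state_predict(dx, P, F, Q):
    """Propagate the error state, eq. (13): dx_k = F dx_{k-1} + w."""
    dx = F @ dx
    P = F @ P @ F.T + Q
    return dx, P

def error_state_update(dx, P, dz, H, R):
    """Correct with an error measurement, eq. (14): dz = H dx + v."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    dx = dx + K @ (dz - H @ dx)
    P = (np.eye(len(dx)) - K @ H) @ P
    return dx, P

# Toy two-state error vector [position error, velocity error]
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity error dynamics
Q = np.diag([1e-4, 1e-3])               # process noise covariance
H = np.array([[1.0, 0.0]])              # only the position error is observed
R = np.array([[0.05]])
dx, P = np.zeros(2), np.eye(2)
dx, P = error_state_predict(dx, P, F, Q)
dx, P = error_state_update(dx, P, dz=np.array([0.2]), H=H, R=R)
print(dx, np.diag(P))
```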

Figure 4 shows the performance of the two fusion methods by analyzing the errors relative to the mean values: (a) is the fusion of the trajectories of sensors 1, 2, and 3; (b) is the fusion of the measurement data obtained from these different sensors, in which each red line represents the error mean value and the blue curves represent the deviations relative to the error mean value.

As displayed in Table 2, the error mean value of measurement data fusion is smaller than that of trajectory fusion; that is, measurement fusion demonstrates more accurate performance in reaching the objects of interest. It can also be seen that the errors of both multisensor fusion methods are much smaller than those of any single sensor, which demonstrates that information fusion is highly valuable in improving robotic SLAM performance for SAR environments.

4. SLAM Simulations with Fusion Information

Sensor IF-SLAM enables truly autonomous robot exploration in an uncertain collapsed area, in which both the exploration trajectory and the positions of all landmarks are estimated in real time; a robot can build a map of an environment and simultaneously use this map to calculate its location.

Some researchers [26] have written software, available online, to simulate and implement the SLAM task in Matlab, C++, and Java. In a particle filter SLAM implementation of a single robot's trajectory, the robot's poses are sampled and the observed landmarks are updated accordingly; hence, the quality of the local submap of a single particle is governed by the accuracy of its trajectory. All observed trajectories together provide a probabilistic location model of the robot. A robust six-degree-of-freedom relative exploration method combines the iterative closest point (ICP) registration algorithm with a noise-adaptive Kalman filter. This approach fuses measurements from a laser scanner and an inertial measurement unit (IMU) in a closed-loop configuration. Aghili and Su [27] integrated ICP and Kalman filtering using LRFs and IMUs to provide accurate real-time pose estimation even when poor vision data prevent ICP from converging. The authors of [28] presented multisensor data retrieval (MSDR) to share measurements and information, which grants different robots asynchronous access to the fusion system.
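A one-dimensional sketch of such a closed-loop ICP/IMU scheme (the constant-velocity model, the noise values, and the fix sequence are toy assumptions; `None` stands in for a scan where ICP fails to converge, so only the IMU prediction is applied):

```python
import numpy as np

def imu_icp_fusion(accels, icp_fixes, dt=0.1, q=0.05, r=0.04):
    """IMU acceleration drives the prediction of a [position, velocity]
    state; when an ICP position fix is available it corrects the state
    via a Kalman update, closing the loop."""
    x, P = np.zeros(2), np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])
    H = np.array([[1.0, 0.0]])
    for a, z in zip(accels, icp_fixes):
        x = F @ x + B * a                          # IMU-driven prediction
        P = F @ P @ F.T + q * np.outer(B, B)
        if z is not None:                          # ICP fix available this scan
            S = H @ P @ H.T + r
            K = (P @ H.T) / S
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
    return x, P

accels = [1.0] * 5 + [0.0] * 5
fixes = [None, None, 0.02, None, 0.08, None, 0.22, None, None, 0.41]
x, P = imu_icp_fusion(accels, fixes)
print(x)
```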

Based on this research, we designed the simulations shown in Figures 5(a)–5(f), in which a virtual mobile robot equipped with multiple sensors runs the SLAM program with an information-fusion algorithm in a simulated SAR environment. In Figure 5(b), the red trajectories represent exploration of the appointed loop-closure area, the red image at the head of the trajectory is the virtual robot, the white dot on the virtual robot represents the gyro odometry, the blue band represents the LRF sensor information, and the grey rays indicate the sonar signals. The measurements of the mobile robot's pose are marked around the robot in real time. Table 1 displays the measurement data obtained from multisensor information fusion.

Comparing the single-sensor trajectory analyses in Figures 2 and 3 with the exploration trajectories of the robot in Figure 5, we can see that the information-fusion methods improve the performance of robot SLAM adapted to the SAR environment.

5. The Experiments for Multi-Information-Fusion SLAM

Milisavljevic's work on sensor and data fusion [10] illustrated a multiple-sensor platform for exploring an environment with many targets, its associated mathematical model, and data fusion performance evaluation for various sensors. An IMU-camera sensor system developed by Panahandeh and Jansson [29] fuses inertial data from the IMU with key features of interest captured by the camera.

Following this research, the experimental system consists of a ground-facing monocular camera mounted on an inertial measurement unit (IMU); the LRF data are processed to determine, by laser scanning, the distance and relative speed of the objects (or obstacles) located in the SAR space. These SLAM methods are adapted to outdoor unstructured environments in addressing the identification of SAR objects and the detection of loop-closure.

A typical experimental platform for 2D mapping is a Pioneer LX mobile robot equipped with a SICK-LMS range finder. With the embedded arnlServer software, the Pioneer LX is self-guided and self-charging. The robot includes a complete robot control system and an embedded personal computer (PC), a differential drive system with encoder feedback, as well as LRFs, ultrasonic (sonar) sensors, and a bumper panel. As shown in Figure 6(a), a Pioneer LX mobile robot conducts the information-fusion SLAM task with its installed LRF sensors, sonars, and gyro odometry. As a result, an improved updated map is formed in Figure 6(c), compared with the initial metric map in Figure 6(b).

As shown in Figures 7(a) and 7(b), a crawler mobile robot explores the collapsed SAR area and performs the SLAM task; the LRF sensors and Kinect are mounted on the robot, and thus the robot obtains multisensor information including image frames and LRF measurement data.

6. Results and Discussion

Information-fusion systems of mobile robots are designed and programmed to cover a map with their installed multisensors. In Figures 8(a), 8(b), and 8(c), the remote desktop at the PC control center displays the robot's SLAM process.

Torra and Narukawa [1] illustrated decision modeling for information fusion, and quality evaluation criteria for data and fusion performance are set out in Milisavljevic's work [10]. The work in [30] discussed a local sensor fusion technique with event-based SLAM tasks, and [31] presented the strategy and flowchart of a hybrid image registration algorithm. According to Reid et al. [32] and Pappas [33], with a robot operating system the exploration and mapping tasks can be subdivided among multiple robots. The use of multiple robots allows SLAM tasks to be completed more efficiently. As the theories of Torra and Milisavljevic [1, 10] and the work of Blum and Liu [31] illustrate, information fusion is an effective research means, and our main contribution is to first develop the novel multirobot and multisensor (MAM) search and rescue SLAM mathematical algorithms based on this fruitful approach. Compared to general single-sensor SLAM, we improved robotic SAR SLAM performance in loop-closure, object identification, and exploration area coverage.

As observed, the robot explores an appointed area and locates itself in this environment based on a preinstalled metric map; the robot processes the acquired multisensor information using the proposed algorithms. As a result, the robot forms an improved updated map compared to the preinstalled initial metric map.

The information data of each sensor are collected and fused to build the entire map of the covered area. A motion estimation approach is introduced for a vision-aided inertial exploration system. A local multisensor fusion technique with scene-based global positioning is applied to improve the localization of a mobile robot within a limited SAR scenario on site. The multirobot integration SLAM enables teams of robots to build joint maps without initial knowledge of their relative poses.

Some conventional information-fusion approaches can be roughly divided into two groups: nonmultiscale-decomposition-based (NMDB) and multiscale-decomposition-based fusion methods. Distributed multisensor trajectory fusion is used to identify information coming from different local particles that mapped the same object of interest. The multisensor information-fusion system can be easily adapted and deployed in a variety of SAR scenarios. On the basis of these useful achievements, we designed the information-fusion systems for mobile robot SLAM adapted to SAR environments.

In Figures 6, 7, and 8, the experimental results show the efficiency and accuracy of the information-fusion methods in loop-closure and in the exploration trajectories of robotic SLAM. The trajectory mean errors are obtained from the sensor error analysis, measurement fusion, and trajectory fusion, and are displayed in Table 1 and Figures 2 and 3. The measurement data fusion estimation and the trajectory fusion estimation are simulated, respectively. The error analysis shown in Algorithm 1 and Figure 4 demonstrates that the two discussed methods of fused information estimation are much superior to a single trajectory and, at the same time, that measurement data fusion is slightly better than trajectory fusion at reaching the targets of interest. In measurement data fusion, essentially, there is no information loss. However, the trajectory fusion method is computationally simpler and faster in exploring and reaching the targets of interest. Comparing Figure 9(a) with Figure 8(c), and Figure 9(b) with Figure 6(c), the results show that conventional SLAM produces deviated robot motion trajectories and an imprecise map, whereas the proposed IF-SLAM forms the accurate maps of Figures 6 and 8, with marked objects, loop-closure, and area coverage.

A challenge for information-fusion systems is that if any one of the sensors is faulty and supplies large errors, the estimated exploration trajectory may become highly inaccurate and inefficient, possibly failing to reach the appointed targets. Therefore, it is strictly required that each of the sensors works well during SLAM while the mobile robot is exploring an SAR postdisaster environment.

7. Conclusions and Future Work

Information-fusion systems are designed for mobile robot SLAM in unstructured collapsed postdisaster scenarios. The mobile robot acquires various pieces of information from the multisensor systems and then fuses this information into an integrated exploration trajectory using the optimal algorithms. The information fusion incorporates two methods, measurement fusion and trajectory fusion, which improve the SLAM performance of mobile robots by estimating the exploration trajectory so as to reach the targets accurately and efficiently.

In this study, information-fusion systems were designed that incorporate mobile robots, multiple sensors (e.g., LRF sensors, localization sonars, gyro odometry, and a Kinect camera), and the algorithm that integrates R-B and EKF into an IPF procedure. We validated the proposed information-fusion methods using a Pioneer LX and a crawler mobile robot equipped with multiple sensors. The results demonstrate that the designed information-fusion systems are well adapted to SAR scenarios and improve mobile robot SLAM performance in object identification, loop-closure, and exploration area coverage. In future work, we will research additional sensors and optimal fusion algorithms, enabling the information-fusion method to integrate more novel sensors and to obtain high-precision estimated exploration trajectories adapted to the SAR environment.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research is supported by the NSFC (National Natural Science Foundation of China) under Grants nos. 61573213, 61473174, and 61473179.