Abstract

Simultaneous localization and mapping (SLAM) is a typical computing-intensive task. Relying only on its own computing power, a mobile robot has difficulty meeting the real-time performance and accuracy requirements of the SLAM process at the same time. Benefiting from the rapid growth of network data transmission rates, cloud computing technology has begun to be applied in robotics. However, relying solely on cloud computing introduces reliability problems. To compensate for insufficient onboard computing capacity, ensure real-time performance and reliability, and improve accuracy, a SLAM algorithm based on edge-cloud collaborative computing is proposed. The edge estimates the mobile robot pose and the local map using a square root unscented Kalman filter (SR-UKF). The cloud estimates the mobile robot pose and the global map using a distributed square root unscented particle filter (DSR-UPF). By using sufficient particles in the cloud, DSR-UPF can improve the SLAM accuracy. The cloud returns the particle with the largest posterior probability to the edge, and the edge performs probability-based edge-cloud data fusion. Both the simulation and the experimental results show that the proposed algorithm can improve the estimation accuracy and reduce the execution time at the same time. By transferring the heavy computation from robots to the cloud, it can enhance the environmental adaptability of mobile robots.

1. Introduction

Since the beginning of the 21st century, autonomous navigation technology has received increasing attention [1, 2]. It is among the disruptive technologies predicted by academic institutions and groups such as the McKinsey Global Institute, Citi GPS, MIT, and the ARK investment management company [3, 4]. Simultaneous localization and mapping (SLAM) is key to realizing autonomous navigation, and it has been widely used in unmanned vehicles, unmanned underwater vehicles, unmanned aerial vehicles, augmented reality, and other fields.

Traditional SLAM algorithms rely on the mobile computing resources carried by a robot to estimate the environmental map and the robot pose. SLAM is a computing-intensive task, and limited computing power makes it difficult for mobile robots to meet real-time requirements. Benefiting from the rapid development of wireless communication technology, cloud technologies have been applied in robotics. The cloud robot, first proposed by Kuffner [5], combines cloud computing technology and robotics and has become a trending research direction [6]. A growing number of cloud platforms, applications, and services have been proposed for robotics. DaVinci, Rapyuta, RoboEarth, sensor-cloud, and others transfer heavy computation from robots to a secure computing environment in the cloud [7, 8]. The humanoid robot developed by Aldebaran Robotics relies on a cloud platform to realize speech recognition, face detection, and video acquisition [9]. A fast object grasping method based on Dex-Net as a Service (DNaaS) has been proposed and verified on the PR2 robot [10]. Pandey et al. designed a cloud underwater robot to collect and monitor marine data, and it completed tasks by sharing local and remote resources [11].

The studies above have proven the practicability and validity of cloud robotics. As a result, new solutions for SLAM have emerged. These solutions divide the SLAM process into two stages: data acquisition on the robot and data computing in the cloud. Arumugam et al. adopted a Hadoop cluster to speed up the particle sampling of the FastSLAM algorithm [12]. This algorithm improves the efficiency of map construction and shares data with other robots using the software as a service (SaaS) model. However, Hadoop is designed for batch processing rather than real-time processing [13]. Its poor real-time performance limits its application in online SLAM environments. Kamburugamuve et al. used a distributed framework to implement the GMapping algorithm [14]. This framework is based on the IoTCloud platform, which connects smart devices to cloud services for real-time data processing [15]. Lidar and inertial sensor data are sent to the cloud as stream events. The cloud executes the SLAM computation, and the resulting robot position and map are returned to the robot. IoTCloud is designed to connect Internet of Things (IoT) devices to the cloud, and its applicability to real-time SLAM remains to be verified. The fault tolerance of this framework should also be improved. Based on the RoboEarth platform, C2TAM transfers the SLAM computation from the robot to the cloud. It uses the parallel tracking and mapping (PTAM) algorithm to achieve multirobot cooperative tracking and map creation [16]. Its limitations are the high bandwidth requirement and the difficulty of applying it to other sensing modalities, such as laser radar. Xu and Bian designed a cloud robotic application platform based on container technology, in which the SLAM computation is provided as microservices deployed in containers [17]. The real-time performance and stability of this method need further verification. Liu et al. designed an RGB-D SLAM method based on cloud computing [18]. It executes mapping, localization, map fusion, and loop detection in the cloud. However, feature detection, pose estimation, and 3D point cloud stitching are still executed on the robot, and the computational load of the robot remains heavy.

To overcome the insufficient capacity and the overburdening of mobile robots in the SLAM process, a SLAM algorithm based on edge-cloud collaborative computing is proposed. The cloud executes a distributed square root unscented particle filter (DSR-UPF) to build the global map by stream computing. To meet the real-time requirement, the edge builds a local map through a square root unscented Kalman filter (SR-UKF) for fast SLAM computation. The main contributions of this paper are as follows:
(1) Compared with traditional cloud-based SLAM algorithms, the proposed algorithm not only ensures real-time performance but also improves accuracy and efficiency through the edge-cloud collaborative architecture.
(2) Compared with the batch processing of traditional cloud-based SLAM algorithms, stream computing with high real-time performance and low latency is used to construct the cloud computing framework. Through distributed parallel computing, DSR-UPF takes advantage of the cloud's abundant computing power to improve execution performance, and the accuracy is improved by the sufficient number of particles. To overcome the drawback of linear approximations to the motion function in traditional particle filter SLAM algorithms, each particle uses the SR-UKF to sample in a highly accurate manner.
(3) In EKF-SLAM, the computation grows rapidly with the number of landmarks. To ensure the real-time performance of the edge, the SR-UKF is used to build the local map. By dealing only with the recently observed landmarks, it reduces the computation.

2. Background

2.1. SLAM Problem

SLAM is a process that enables a mobile robot to build a map and locate itself at the same time in an unknown environment. The goal of SLAM is to obtain the best estimate of the robot pose and the map based on the control data $u_{1:t}$ and the observation data $z_{1:t}$ from the start to time $t$. It can be described by a posterior probability distribution as follows:

p(x_t, m \mid z_{1:t}, u_{1:t})  (1)

where $x_t$ is the robot pose at time $t$ and $m$ is the entire map.

According to the Bayes rule, the law of total probability, and the Markov hypothesis, the posterior probability distribution of the robot pose and the environmental map can be rewritten as follows:

p(x_t, m \mid z_{1:t}, u_{1:t}) = \eta \, p(z_t \mid x_t, m) \int p(x_t \mid x_{t-1}, u_t) \, p(x_{t-1}, m \mid z_{1:t-1}, u_{1:t-1}) \, dx_{t-1}  (2)

where $\eta$ is a normalization constant.

In general, the integral in Eq. (2) cannot be evaluated in closed form. The extended Kalman filter (EKF) and the particle filter (PF) are tractable approximations of the general Bayes filter.

2.2. PF-SLAM

The EKF-SLAM (extended Kalman filter SLAM) and the PF-SLAM (particle filter SLAM) are two widely used solutions to the SLAM problem.

For PF-SLAM, the greater the particle number, the higher the accuracy SLAM can achieve. However, more particles mean more computation, and traditional algorithms relying only on the robot's own resources cannot meet real-time requirements. In reference [19], simulation experiments with 100, 300, and 500 particles were executed. The experimental results show that 500 particles obtained a more accurate estimate of the environment map and the robot pose than 100 or 300 particles. To control the particle number and improve the accuracy simultaneously, scholars have improved and optimized the particle sampling and resampling processes, but the problem of the massive computation of particle filter SLAM algorithms has not been fundamentally solved [20–22].

2.3. Stream Computing

The traditional data processing architecture is mainly aimed at transactional data processing scenarios with low real-time requirements, and it is difficult to apply in scenarios with high real-time requirements. Stream computing uses distributed ideas and methods to process data in real time. It can provide enormous computing power for mobile robots, ensuring that low-cost robots can perform more complex sensor data processing and decision-making [23].

3. The SLAM Architecture Based on Edge-Cloud Collaborative Computing

The system architecture of the proposed SLAM algorithm is shown in Figure 1. It is composed of the edge and the cloud. The edge is the mobile robot, and the cloud is a stream computing cluster.

The edge includes five modules: data acquisition, message client, local SLAM computing, edge-cloud fusion, and local storage. The data acquisition module acquires the lidar data and the inertial navigation data and formats them into a unified form. The message client seals, sends, receives, and unseals messages. The local SLAM computing module performs real-time SLAM computation to estimate the mobile robot pose and the local map. The local storage module stores the map information over a given period. Based on the probability distribution, the edge-cloud fusion module fuses the SLAM computing results from the edge and the cloud.

The cloud includes four modules: message server, coordination and monitoring, distributed SLAM computing, and global storage. The message server provides a highly available, load-balanced message service through a Kafka cluster, which enables reliable communication between the cloud and mobile robots. The Hadoop distributed file system (HDFS) is used to store the global map, sensor data, path information, particle information, etc. The coordination and monitoring module realizes the coordination, monitoring, and management of the control node and the computing nodes. As a distributed coordination service, ZooKeeper coordinates the SLAM computing actions and makes the distributed SLAM more reliable. The distributed SLAM computing module is based on the Flink stream computing platform.

4. The SLAM Process Based on Edge-Cloud Collaborative Computing

The process flow of the proposed algorithm is shown in Figure 2. In each control decision cycle, the acquired data are transferred to the edge and the cloud, which execute SLAM computation at the same time. In the edge, the robot pose and the local map are computed by the SR-UKF. The cloud performs loop detection, distributed SLAM computing, and prior knowledge correction in parallel. The distributed SLAM uses the DSR-UPF, and each particle is calculated separately. The particle with the largest posterior probability is returned to the edge. The edge fuses the edge SLAM result with the cloud SLAM result, updates the state of the mobile robot, and saves the local map. Finally, the cloud executes resampling according to the particle diversity and saves the global map information to the HDFS and the cache.

4.1. Process Flow of the Edge

The process flow of the edge is mainly divided into six steps: data acquisition, message transmission, edge SLAM, message reception, fusion of the edge and cloud SLAM results, and local map storage.

4.1.1. Data Acquisition

The mobile robot acquires the control data from the inertial navigation system (INS) and the observed data from the lidar.

4.1.2. Message Transmission

The mobile robot encapsulates the control data and the observed data of a control cycle into a message. Each message is marked with a unique sequence number, and this number is also encapsulated into the reply message from the cloud; it identifies one interaction between the edge and the cloud. The Kafka client transmits the message to the cloud over the communication network.
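
As an illustration, a minimal sketch of such a message client is given below, assuming the kafka-python package; the broker address, topic name, and message fields are illustrative assumptions rather than the paper's actual configuration.

import json
import time
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="cloud-broker:9092",  # assumed broker address
    value_serializer=lambda m: json.dumps(m).encode("utf-8"),
)

def send_cycle(seq, control, observation):
    # One control cycle is encapsulated into one message; the unique
    # sequence number identifies the edge-cloud interaction it belongs to.
    message = {
        "seq": seq,
        "timestamp": time.time(),
        "control": control,          # e.g., [velocity, steering_angle]
        "observation": observation,  # e.g., a list of [range, bearing] pairs
    }
    producer.send("slam-sensor-data", value=message)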

4.1.3. The Edge SLAM Based on the SR-UKF

The SR-UKF (square root unscented Kalman filter) SLAM algorithm is an improvement of EKF-SLAM, which iteratively updates the system state and the covariance matrix. As the number of landmarks increases, the system state vector and covariance matrix become larger, and the computation increases dramatically. To ensure real-time performance, the edge uses the SR-UKF algorithm to build the local map, and only the recently observed landmarks are used for the map update. This keeps the covariance matrix from growing too large and reduces the computation. Compared with the EKF, the SR-UKF avoids the linearization model error and improves the system accuracy.

The SR-UKF constructs the statistical characteristics of the system by $2n+1$ sigma points, which can characterize the system state vector, where $n$ is the dimension of the system state. First, the SR-UKF initializes the system mean, the square root of the covariance matrix, and the weights of the sigma points as follows:

\hat{x}_0 = E[x_0], \qquad S_0 = \operatorname{chol}(P_0)
\lambda = \alpha^2 (n + \kappa) - n
W_0^{(m)} = \lambda / (n + \lambda), \qquad W_0^{(c)} = \lambda / (n + \lambda) + (1 - \alpha^2 + \beta)
W_i^{(m)} = W_i^{(c)} = 1 / (2(n + \lambda)), \qquad i = 1, \ldots, 2n

where $\hat{x}_0$ and $P_0$ are the mean and the covariance of the initial state. chol denotes a Cholesky factorization that decomposes a symmetric, positive-definite matrix into the product of a lower-triangular matrix and its transpose [24]. $\lambda$ is the scaling parameter. $\alpha$ determines the spread of the sigma points around the system mean, and it is usually set to a small positive value. $\beta$ is used to incorporate the prior distribution of the state variable, and it is set to 2 for a Gaussian distribution. $\kappa$ is a secondary scaling factor, and it is set to 0. $W_i^{(m)}$ is the mean weight of the $i$-th sigma point, and $W_i^{(c)}$ is the covariance weight.
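
As a concrete illustration, this initialization and the sigma-point construction can be sketched in NumPy as follows; the function and variable names are ours, not the paper's, and alpha is set to an assumed small value.

import numpy as np

def sigma_points(state_mean, sqrt_cov, alpha=1e-3, beta=2.0, kappa=0.0):
    n = state_mean.size
    lam = alpha**2 * (n + kappa) - n
    eta = np.sqrt(n + lam)  # spread of the sigma points
    # 2n + 1 sigma points: the mean plus symmetric deviations along the
    # columns of the square-root covariance factor.
    points = np.tile(state_mean, (2 * n + 1, 1))
    points[1:n+1] += eta * sqrt_cov.T
    points[n+1:] -= eta * sqrt_cov.T
    # Mean and covariance weights of the sigma points.
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    return points, wm, wc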

At time $t-1$, the mean and the square root of the covariance matrix of the system state are $\hat{x}_{t-1}$ and $S_{t-1}$. The iterative update of $\hat{x}_t$ and $S_t$ proceeds through sigma point calculation, prediction, and observation update.

(1) Calculate the Sigma Points.

\mathcal{X}_{t-1} = [\hat{x}_{t-1}, \; \hat{x}_{t-1} + \eta S_{t-1}, \; \hat{x}_{t-1} - \eta S_{t-1}]

where the scale $\eta$ is calculated as follows:

\eta = \sqrt{n + \lambda}

(2) Prediction. The sigma points at time $t$ are calculated as follows:

\mathcal{X}_{t|t-1} = f(\mathcal{X}_{t-1}, u_t)

where $f$ is the motion model function and $u_t$ is the control data.

The predicted mean and the square root of the covariance matrix are calculated by Eqs. (9)–(12):

\hat{x}_t^- = \sum_{i=0}^{2n} W_i^{(m)} \mathcal{X}_{i,t|t-1}  (9)

S_t^- = \operatorname{qr}\{[\sqrt{W_1^{(c)}} \, (\mathcal{X}_{1:2n,t|t-1} - \hat{x}_t^-), \; \sqrt{Q_t}]\}  (10)

S_t^- = \operatorname{cholupdate}\{S_t^-, \mathcal{X}_{0,t|t-1} - \hat{x}_t^-, W_0^{(c)}\}  (11)

$S_t^-$ is calculated by a QR decomposition, which decomposes a matrix into a normal orthogonal matrix $Q$ and an upper triangular matrix $R$; qr denotes the QR decomposition of a matrix, and $Q_t$ is the process noise covariance. To ensure that the square root of the covariance matrix remains positive semidefinite, the Cholesky update (cholupdate) function [25] is used to update $S_t^-$. The sigma points are updated as follows:

\mathcal{X}_{t|t-1} = [\hat{x}_t^-, \; \hat{x}_t^- + \eta S_t^-, \; \hat{x}_t^- - \eta S_t^-]  (12)
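
The prediction step can be sketched as follows, under the standard SR-UKF formulation; the rank-1 Cholesky update helper stands in for the cholupdate function of Eq. (11), and motion_model and sqrt_Q (the square root of the process noise covariance) are assumed inputs.

import numpy as np
from scipy.linalg import qr

def cholupdate(S, x, sign):
    # Rank-1 update (sign = +1) or downdate (sign = -1) of a lower-triangular
    # Cholesky factor S, so that S_new @ S_new.T == S @ S.T + sign * outer(x, x).
    S, x = S.copy(), x.copy()
    for k in range(x.size):
        r = np.sqrt(S[k, k]**2 + sign * x[k]**2)
        c, s = r / S[k, k], x[k] / S[k, k]
        S[k, k] = r
        S[k+1:, k] = (S[k+1:, k] + sign * s * x[k+1:]) / c
        x[k+1:] = c * x[k+1:] - s * S[k+1:, k]
    return S

def srukf_predict(points, wm, wc, motion_model, u, sqrt_Q):
    prop = np.array([motion_model(p, u) for p in points])  # Eq. (8)
    mean = wm @ prop                                       # Eq. (9)
    # QR of the weighted deviations stacked with the noise root, Eq. (10).
    compound = np.vstack([np.sqrt(wc[1]) * (prop[1:] - mean), sqrt_Q.T])
    _, R = qr(compound, mode='economic')
    # Fold in the (possibly negative-weight) central point, Eq. (11).
    S = cholupdate(R.T, np.sqrt(abs(wc[0])) * (prop[0] - mean),
                   1.0 if wc[0] >= 0 else -1.0)
    return mean, S, prop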

(3) Update by Observation. Using the observation model $h$, the landmark observation at time $t$ is predicted as follows:

\mathcal{Z}_{t|t-1} = h(\mathcal{X}_{t|t-1}), \qquad \hat{z}_t^- = \sum_{i=0}^{2n} W_i^{(m)} \mathcal{Z}_{i,t|t-1}

A QR decomposition and cholupdate are used to update the square root of the innovation covariance $S_{\tilde{z}_t}$ as follows:

S_{\tilde{z}_t} = \operatorname{qr}\{[\sqrt{W_1^{(c)}} \, (\mathcal{Z}_{1:2n,t|t-1} - \hat{z}_t^-), \; \sqrt{R_t}]\}

S_{\tilde{z}_t} = \operatorname{cholupdate}\{S_{\tilde{z}_t}, \mathcal{Z}_{0,t|t-1} - \hat{z}_t^-, W_0^{(c)}\}

where $R_t$ is the observation noise covariance.

The Kalman gain is calculated as follows:

P_{x_t z_t} = \sum_{i=0}^{2n} W_i^{(c)} \, (\mathcal{X}_{i,t|t-1} - \hat{x}_t^-)(\mathcal{Z}_{i,t|t-1} - \hat{z}_t^-)^T

K_t = (P_{x_t z_t} / S_{\tilde{z}_t}^T) / S_{\tilde{z}_t}

Finally, the mean and the square root of the covariance matrix are updated by the Kalman gain:

\hat{x}_t = \hat{x}_t^- + K_t (z_t - \hat{z}_t^-)

U = K_t S_{\tilde{z}_t}

S_t = \operatorname{cholupdate}\{S_t^-, U, -1\}
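
A matching sketch of the observation update, reusing the cholupdate helper and imports from the prediction sketch; obs_model, the actual observation z, and sqrt_R (the square root of the observation noise covariance) are assumed inputs.

from scipy.linalg import solve_triangular

def srukf_correct(mean, S, prop, wm, wc, obs_model, z, sqrt_R):
    Z = np.array([obs_model(p) for p in prop])  # predicted observations
    z_mean = wm @ Z
    # Square root of the innovation covariance, via QR plus Cholesky update.
    _, Rz = qr(np.vstack([np.sqrt(wc[1]) * (Z[1:] - z_mean), sqrt_R.T]),
               mode='economic')
    Sz = cholupdate(Rz.T, np.sqrt(abs(wc[0])) * (Z[0] - z_mean),
                    1.0 if wc[0] >= 0 else -1.0)
    # Cross covariance between the state and the observation.
    Pxz = sum(wc[i] * np.outer(prop[i] - mean, Z[i] - z_mean)
              for i in range(len(prop)))
    # Kalman gain K = Pxz (Sz Sz^T)^(-1), computed by two triangular solves.
    K = solve_triangular(Sz.T, solve_triangular(Sz, Pxz.T, lower=True),
                         lower=False).T
    mean = mean + K @ (z - z_mean)
    # Downdate the square-root covariance once per column of K Sz.
    U = K @ Sz
    for i in range(U.shape[1]):
        S = cholupdate(S, U[:, i], -1.0)
    return mean, S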

4.1.4. Message Reception

The edge receives messages through the Kafka client. To ensure system reliability, the exactly-once mode is adopted, which guarantees that message processing and commit feedback occur in the same transaction, i.e., atomically.

4.1.5. Local Map Storage

The local map and the robot position from the result of the edge-cloud fusion are stored in the edge.

4.2. Process Flow of the Cloud

The process flow of the cloud is mainly divided into six steps: message reception, parallel computing, particle combination, computing result transmission, resampling, and global map storage.

4.2.1. Message Reception

Based on the Kafka cluster, the control and observation data from the mobile robot are received. The data are distributed to the loop detection, distributed SLAM, and prior knowledge correction modules for concurrent execution.

4.2.2. Parallel Computing

(1) Loop Detection. Loop detection compares the latest observation with the observations of all previous periods stored in the cloud. The scan-to-scan method is used for loop detection, which matches two frames of lidar data. The Iterative Closest Point (ICP) algorithm matches two frames by calculating the rigid transformation between the two point clouds [26]. If the latest received data are highly similar to the data at some time in the past, a closed loop is formed. With the closed loop, the cumulative error is reduced.
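
As an illustration of the scan-to-scan matching step, the following sketch uses the point-to-point ICP implementation of the Open3D library; the correspondence threshold, the lifting of 2D lidar points to 3D, and the use of the fitness score as a loop-closure indicator are illustrative assumptions rather than the paper's exact procedure.

import numpy as np
import open3d as o3d

def match_scans(points_a, points_b, threshold=0.5):
    # points_a, points_b: (N, 2) arrays of 2D lidar points, lifted to 3D.
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(np.c_[points_a, np.zeros(len(points_a))])
    dst = o3d.geometry.PointCloud()
    dst.points = o3d.utility.Vector3dVector(np.c_[points_b, np.zeros(len(points_b))])
    result = o3d.pipelines.registration.registration_icp(
        src, dst, threshold, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    # A high fitness suggests the two scans are highly similar, i.e., a
    # candidate loop closure; result.transformation is the rigid transform.
    return result.transformation, result.fitness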

(2) DSR-UPF SLAM. The DSR-UPF (distributed square root unscented particle filter) SLAM is a distributed parallel PF-SLAM algorithm. In PF-SLAM, the particle number determines the algorithm accuracy. More particles bring not only higher accuracy but also more computation. To improve the accuracy, the cloud adopts enough particles based on the cloud computing power, and it executes the PF-SLAM algorithm in a distributed manner. In the DSR-UPF SLAM, each particle is independently associated with the estimation of landmarks. At time $t$, the particle set is shown in Figure 3. $x_t^{(i)}$ is the estimated pose of the $i$-th particle, and $w_t^{(i)}$ is the weight of this particle. $(\mu_{j,t}^{(i)}, \Sigma_{j,t}^{(i)})$ represent the mean and the covariance matrix of the $j$-th landmark estimate of the $i$-th particle.

As shown in Figure 4, each particle is estimated independently.

Each computing node predicts and updates the new pose by the SR-UKF, and it updates the landmark positions by the extended Kalman filter (EKF). The landmark update depends on whether the landmark is observed at time $t$. If the landmark is newly observed, its mean and covariance are initialized as follows:

\mu_{j,t}^{(i)} = h^{-1}(z_t, x_t^{(i)}), \qquad \Sigma_{j,t}^{(i)} = G_t R_t G_t^T

where $h^{-1}$ is the inverse function of the observation model and $G_t$ is the Jacobian matrix of the function $h^{-1}$. If the $j$-th landmark is not observed at time $t$, its mean and covariance remain unchanged. If the $j$-th landmark is observed at time $t$, the mean and the covariance are updated as follows:

K_t = \Sigma_{j,t-1}^{(i)} H_t^T (H_t \Sigma_{j,t-1}^{(i)} H_t^T + R_t)^{-1}
\mu_{j,t}^{(i)} = \mu_{j,t-1}^{(i)} + K_t (z_t - \hat{z}_t)
\Sigma_{j,t}^{(i)} = (I - K_t H_t) \Sigma_{j,t-1}^{(i)}

where $K_t$ is the Kalman gain and $H_t$ is the Jacobian matrix of the observation model.
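
A minimal sketch of this per-particle landmark update is given below; obs_model, jacobian, and R stand in for the paper's observation model h, its Jacobian H, and the observation noise covariance.

import numpy as np

def update_landmark(mu, sigma, pose, z, obs_model, jacobian, R):
    z_hat = obs_model(pose, mu)                # predicted observation
    H = jacobian(pose, mu)                     # Jacobian of h w.r.t. the landmark
    Q = H @ sigma @ H.T + R                    # innovation covariance
    K = sigma @ H.T @ np.linalg.inv(Q)         # Kalman gain
    mu = mu + K @ (z - z_hat)                  # updated landmark mean
    sigma = (np.eye(mu.size) - K @ H) @ sigma  # updated landmark covariance
    return mu, sigma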

The particle weight is defined as the ratio of the target distribution to the proposal distribution. The $i$-th particle weight is defined as follows:

w_t^{(i)} = \frac{p(x_{1:t}^{(i)} \mid z_{1:t}, u_{1:t})}{q(x_{1:t}^{(i)} \mid z_{1:t}, u_{1:t})}

The proposal distribution can be represented in a recursive form as follows:

q(x_{1:t} \mid z_{1:t}, u_{1:t}) = q(x_t \mid x_{1:t-1}, z_{1:t}, u_{1:t}) \, q(x_{1:t-1} \mid z_{1:t-1}, u_{1:t-1})

The Bayes rule is used to calculate the weight as follows:

w_t^{(i)} \propto w_{t-1}^{(i)} \, \frac{p(z_t \mid x_t^{(i)}) \, p(x_t^{(i)} \mid x_{t-1}^{(i)}, u_t)}{q(x_t^{(i)} \mid x_{1:t-1}^{(i)}, z_{1:t}, u_{1:t})}

where the proposal distribution $q(x_t^{(i)} \mid x_{1:t-1}^{(i)}, z_{1:t}, u_{1:t})$ is the Gaussian computed by the SR-UKF for each particle.

(3) Prior Knowledge Correction. Most SLAM algorithms assume that the control noise statistics and observation noise statistics are completely known and correct. Because of the complexity of the real world, this assumption rarely holds. Incorrect a priori knowledge of the control and observation noise matrices can seriously degrade the accuracy of these algorithms [27–30]. Based on previous research [31], a dynamic fractional-order and alpha-stable distribution particle swarm optimization method is adopted, and the prior knowledge $Q_t$ and $R_t$ is adjusted dynamically by a fitness function. This fitness function is based on the inconsistency between the predicted observations and the actual observations. By introducing the prior knowledge correction step, the algorithm can track the real values of the prior knowledge more accurately.

4.2.3. Particle Combination

After all particles are updated, the effective particle number is calculated as follows:

N_{eff} = \frac{1}{\sum_{i=1}^{M} \big(\tilde{w}_t^{(i)}\big)^2}

where $M$ is the number of particles and $\tilde{w}_t^{(i)}$ is the normalized weight of the $i$-th particle. If $N_{eff}$ is less than a specified threshold, the particle set is resampled. After resampling, all particle weights are reset to $1/M$.
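
In code, the resampling test reduces to a few lines; the threshold ratio is an assumed parameter.

import numpy as np

def needs_resampling(weights, threshold_ratio=0.5):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                 # normalized weights
    n_eff = 1.0 / np.sum(w**2)      # effective particle number
    return n_eff < threshold_ratio * len(w)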

4.2.4. Computing Result Transmission

The particle with the largest weight is returned to the edge.

4.2.5. Particle Resampling

SLAM algorithms based on the particle filter degenerate over time no matter how many landmarks are in the environment or how many particles are used in the estimation [32]. Resampling aims to remedy particle degeneracy, but it always leads to a loss of particle diversity, which is called particle depletion [33]. To preserve the diversity of particles as much as possible, biological evolutionary algorithms have been applied to resampling [34, 35]. To prevent particle degeneracy and improve particle diversity, an improved genetic resampling method is proposed. It uses a double roulette wheel as the selection operation and a fast Metropolis-Hastings (FMH) mutation as the mutation operation. The FMH mutation alleviates the divergence problem of traditional mutation methods and produces particles that better reflect the target distribution [36]. The main procedure of particle resampling is as follows:

Input: M initial particles
Steps:
1. particlesM1 = SelectParticles(particles, Ps);
2. particlesM2 = FastMutation(particlesM1, Pm);
3. particlesM3 = Crossover(particlesM1, Pc);
4. newParticles = particlesM1 + particlesM2 + particlesM3;
Output: M new particles

The SelectParticles method uses double roulette selection to choose M1 parent particles from the M initial particles. The FastMutation method uses the FMH mutation to generate M2 particles from the M1 parent particles. The Crossover method uses the crossover operation to generate M3 particles from the M1 parent particles.

$P_s$, $P_c$, and $P_m$ represent the probabilities of selection, crossover, and mutation, respectively. $P_s$ is set to 0.5, and the values of $P_c$ and $P_m$ are determined by the particle diversity: the function $D(i)$ calculates the sum of the distances between the $i$-th particle and the other particles, $D_{\max}$ represents the maximum value of $D(i)$ from the beginning, and $P_c$ and $P_m$ are calculated from these two quantities.

4.2.6. Global Map Storage

The global map, sensor information, and all particle information are stored in the distributed cache and the HDFS.

4.3. Process Flow of the Edge-Cloud Fusion

The edge-cloud fusion combines the edge results and the cloud results. To improve reliability and overcome the problems caused by transmission delay, the edge-cloud fusion depends on the data transmission time. If the cloud result is not returned to the edge in time, the edge-cloud fusion is not executed, and the edge result is directly used as the SLAM result.

Based on the fusion principle of the posterior probability ratio, the SLAM results from the cloud and the edge are fused. The posterior probability of the edge SLAM is evaluated from the following quantities: $\hat{x}_t^e$, the edge estimate of the mobile robot pose at time $t$, and $\Sigma_t^e$, the covariance matrix of this estimate; the edge estimate of the environment map, which includes the landmark estimates, where $\mu_j^e$ is the estimated mean of the $j$-th landmark and $\Sigma_j^e$ is the relevant covariance matrix; $z_{t,j}$, the real observation of the $j$-th landmark, and $\hat{z}_{t,j}^e$, its predicted observation; and $\bar{x}_t$, the robot pose predicted by the motion model.

The posterior probability of the cloud SLAM is evaluated analogously, where $\hat{x}_t^c$ is the cloud estimate of the mobile robot pose, the cloud estimate of the environment map includes the landmark estimates $\mu_j^c$, and $\hat{z}_{t,j}^c$ is the predicted observation of the $j$-th landmark.

The fused pose and map estimate for the mobile robot is then obtained by comparing the two posterior probabilities: the estimate with the larger posterior probability is adopted as the fusion result.
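
A schematic sketch of this fusion rule is shown below, under the assumption that each side's posterior is scored by a Gaussian log-likelihood of its landmark innovations; the dictionary fields and the timeout flag are illustrative.

import numpy as np

def gaussian_loglik(innovations, covariances):
    # Sum of log N(v; 0, P) over the matched landmark innovations v.
    total = 0.0
    for v, P in zip(innovations, covariances):
        total += -0.5 * (v @ np.linalg.solve(P, v)
                         + np.log(np.linalg.det(2.0 * np.pi * P)))
    return total

def fuse(edge_est, cloud_est, cloud_arrived_in_time):
    # Reliability fallback: without a timely cloud result, keep the edge result.
    if not cloud_arrived_in_time:
        return edge_est
    # Each estimate carries "pose", "map", and a "loglik" score computed by
    # gaussian_loglik on that estimate's innovations.
    return edge_est if edge_est["loglik"] >= cloud_est["loglik"] else cloud_est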

5. Experimental Results and Discussion

To verify the performance of the proposed algorithm, comparisons are made between the proposed algorithm, FastSLAM 2.0 [37], and UFastSLAM [38]. FastSLAM 2.0 and its variants have become the most widely used laser SLAM solutions. They have been applied in the fields of unmanned vehicles, unmanned aerial vehicles, unmanned ships, mobile robots, and so on. As an important improvement of FastSLAM 2.0, UFastSLAM applies the UKF to the iterative update of the mobile robot pose. UFastSLAM is widely used not only in laser SLAM but also in visual SLAM [39].

5.1. Simulation Experiments

In the simulation environment, the system state is described in the Cartesian coordinate system. The motion and observation models are as follows:

x_{t+1} = x_t + v_t \Delta T \cos(\phi_t + \alpha_t)
y_{t+1} = y_t + v_t \Delta T \sin(\phi_t + \alpha_t)
\phi_{t+1} = \phi_t + \frac{v_t \Delta T}{L} \sin(\alpha_t)

z_t = [r_t, \theta_t]^T, \qquad r_t = \sqrt{(x_i - x_t)^2 + (y_i - y_t)^2}

where $(x_t, y_t)$ represents the mobile robot position, and the heading $\phi_t$ is the motion direction at time $t$, with a value range of $[-\pi, \pi]$ rad. $(x_i, y_i)$ is the $i$-th landmark position. $u_t$ and $z_t$ denote the control and observation data, respectively. $u_t$ includes the velocity $v_t$ and the steering angle $\alpha_t$. $z_t$ represents the range-bearing observation from the robot to the $i$-th landmark, where $r_t$ is the distance and $\theta_t$ is the angle from 0 to $2\pi$ clockwise. $\Delta T$ is the sampling interval, and $L$ is the wheel base.
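
A sketch of these simulated models, with assumed values for the sampling interval and wheel base; the bearing here uses the counterclockwise atan2 convention rather than the clockwise convention described above.

import numpy as np

DT = 0.025        # assumed sampling interval (s)
WHEEL_BASE = 4.0  # assumed wheel base (m)

def motion_model(pose, v, alpha):
    x, y, phi = pose
    return np.array([x + v * DT * np.cos(phi + alpha),
                     y + v * DT * np.sin(phi + alpha),
                     phi + v * DT * np.sin(alpha) / WHEEL_BASE])

def observation_model(pose, landmark):
    x, y, phi = pose
    dx, dy = landmark[0] - x, landmark[1] - y
    return np.array([np.hypot(dx, dy),           # range
                     np.arctan2(dy, dx) - phi])  # bearing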

In the simulation environment, a low-cost embedded control board is used to emulate the whole mobile robot system. As a control center, such a board is widely used in portable and low-cost mobile robot systems. The control board is equipped with a four-core 1.5 GHz CPU and 2 GB of memory and runs the Robot Operating System (ROS). The control board simulates the robot moving process according to the predefined path. To simulate the real environment, random noise is added to the speed and steering angle, and the noisy data are used as the control data. The distance and angle between the landmarks and the robot are calculated, and the distance and angle with random noise added are taken as the observation data. This process simulates the acquisition of the control data and the observation data. The maximum driving speed is 3 m/s, and the maximum steering angle $\alpha_t$ is rad. The speed noise $\varepsilon_v$ is 0.3 m/s, and the angle noise is rad. The robot is equipped with a range-bearing sensor with a $\pi$ rad frontal view and a maximum range of 30 m. The range noise of the observation is 0.2 m, and the angle noise is rad. The cloud uses three servers to build a stream computing cluster based on the Flink platform and related components. Each cloud server has an eight-core 2.2 GHz CPU and 16 GB of memory.

As shown in Figure 5(a), the mobile robot moves in an area with 75 landmarks. The robot starts from (0, 0) and moves through a series of planned target points. FastSLAM 2.0 and UFastSLAM use 20 particles to estimate the system state. With the abundant computing power of the cloud, the proposed algorithm uses 40 particles. Figures 5(b)–5(d) show the experimental results of the three algorithms. The black solid line represents the motion trajectory, and the dotted line represents the path estimated by each algorithm. The black plus sign (+) represents the landmark position, and the circle sign (○) represents the estimated landmark position.

Estimation accuracy and execution efficiency are the most important indicators of SLAM algorithms, and the three algorithms are compared on these two indicators. Estimation accuracy mainly includes the accuracy of the robot position estimates and the accuracy of the map. The higher the estimation accuracy, the closer the SLAM estimate is to the real state of the robot system; high estimation accuracy helps robots explore unknown environments and navigate autonomously. The trajectory and map estimated by the proposed algorithm are evidently closer to the actual system state, and its errors in both path estimation and landmark position estimation are much smaller than those of the other two algorithms. The execution efficiency determines whether a SLAM algorithm can be applied to autonomous navigation. Thanks to the cloud computing power, the proposed algorithm has the shortest average single-step execution time at 32 ms, followed by FastSLAM 2.0 at 36 ms and UFastSLAM at 49 ms.

Because the control and observation noise are random, the results of each experiment differ. To obtain a more detailed and accurate comparison, 50 simulation experiments are carried out in the simulation environment of Figure 5(a). The root mean square error (RMSE) is used to measure the deviation between the estimated value and the real value, and it reflects the SLAM accuracy well. Table 1 shows the comparison results based on the RMSE.

RMSE_P represents the RMSE of the robot position estimation, and RMSE_L represents the RMSE of the landmark estimation. The results show that the proposed algorithm is more accurate than the other two algorithms in estimating the robot position and the landmark locations. The execution time is the accumulation of the single-step execution times in each experiment. The execution time of the proposed algorithm is also shorter than that of the other two algorithms. Owing to the limitations of the experimental conditions, the total number of processor cores in the cloud is smaller than the number of particles. If the total number of cores were larger than the number of particles, the parallelism of the proposed algorithm would be greatly improved, and the performance would improve further. At the same time, if the cloud resources are sufficient, more particles can be used to improve the accuracy.

Figure 6 shows the RMSE comparisons between the three algorithms over the time series. The four subfigures show the RMSE comparisons in the x-axis, the y-axis, the heading, and the position. The RMSE of the proposed algorithm is lower than that of the other two algorithms in every time period.

Figure 7 shows the RMSE comparison of the landmark positions between the three algorithms after the loop is closed. The landmark estimation errors of the proposed algorithm are smaller than those of the other two algorithms.

Consistency means that the deviation between the estimated state and the true state should be kept at a roughly constant level. The normalized estimation error squared (NEES) [40] is often used to measure SLAM consistency over Monte Carlo runs and is defined as follows:

\epsilon_t = (x_t - \hat{x}_t)^T P_t^{-1} (x_t - \hat{x}_t)

where $\hat{x}_t$ and $P_t$ are the estimated mean and covariance of the robot pose, respectively, and $x_t$ is the real pose. Given $N$ runs, the average NEES is computed as follows:

\bar{\epsilon}_t = \frac{1}{N} \sum_{i=1}^{N} \epsilon_t^{(i)}
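
A small sketch of the average NEES computation; the inputs are the estimated pose, its covariance, and the true pose of each Monte Carlo run at a given time step.

import numpy as np

def average_nees(est_poses, est_covs, true_poses):
    # One NEES value per Monte Carlo run, then averaged across runs.
    nees = [(x - e) @ np.linalg.solve(P, x - e)
            for e, P, x in zip(est_poses, est_covs, true_poses)]
    return np.mean(nees)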

For the 3-dimensional vehicle pose and the 50 Monte Carlo runs, the two-sided 95% probability region is bounded by the interval [2.36, 3.72]. If the average NEES exceeds the upper bound, the SLAM algorithm has become optimistic [41]. Figure 8 shows the average NEES comparison between the three algorithms. The FastSLAM 2.0 algorithm rapidly becomes optimistic, while the average NEES of the proposed algorithm remains at a low level for a long time.

To further verify the effectiveness of the proposed algorithm, a map with sparse landmarks, shown in Figure 9, is constructed, and 50 simulation experiments are carried out in this environment. Table 2 shows the comparison results between the three algorithms based on the RMSE. Compared with the experimental map of Figure 5, the number of landmarks is decreased from 75 to 35. This decrease reduces both the SLAM computation and the estimation accuracy. Compared with the other two algorithms, the proposed algorithm achieves higher accuracy in the robot pose and landmark position estimates and less computation time.

The experimental results show that the proposed algorithm achieves higher accuracy and less computation time on maps with both dense and sparse landmarks.

5.2. Experiments with the University Car Park and Victoria Park Dataset

The University Car Park and the Victoria Park dataset are collected by the Australian Centre for Field Robotics (ACFR) in Sydney. They are popular in the SLAM research community. A truck equipped with GPS, inertial, and laser sensors is tested. The motion model of the truck is shown in Figure 10.

The motion and observation models follow the vehicle kinematics shown in Figure 10, where $v_c$ is the velocity of the center of the axle, $v_e$ is the velocity of the back left wheel, $\Delta T$ is the sampling interval, and $\alpha_t$ is the steering angle.

5.2.1. The University Car Park Dataset

The University Car Park dataset [42] is used to compare the FastSLAM 2.0, the UFastSLAM, and the proposed algorithm. The experimental environment is shown in Figure 11(a), and the results of the three algorithms are shown in Figures 11(b)11(d). The estimated trajectory by the proposed algorithm is the most consistent with the GPS trajectory. It indicates that the estimation accuracy of the proposed algorithm is better than that of the UFastSLAM and the FastSLAM 2.0 algorithms.

Table 3 shows the estimation accuracy comparison between the three algorithms, where the two metrics are the average distance error of the robot position estimation and the average distance error of the landmark position estimation. As shown in Table 3, the estimation error of the proposed algorithm is smaller than that of FastSLAM 2.0 and UFastSLAM for both the robot position estimation and the landmark position estimation. The execution time of the proposed algorithm is also shorter than that of UFastSLAM and FastSLAM 2.0. The single-step execution time of the proposed algorithm is 29 ms, which is less than the sampling interval, so it can meet the real-time requirements.

5.2.2. The Victoria Park Dataset

The Victoria Park dataset [43] is used to compare the three algorithms. The experimental environment is a large-scale environment, as shown in Figure 12. The dataset records the sensor data obtained by manually driving the intelligent vehicle for approximately 3.5 kilometers in 1545 seconds. The vehicle collects 61945 frames of inertial navigation data, 7249 frames of lidar data, and 4461 frames of GPS data.

The main obstacles in the environment are trees. Because the lidar is mounted low, the tree trunks are used as landmarks. Owing to the occlusion by trees, the GPS data are discontinuous. The positions of these natural landmarks were not surveyed, so the true landmark positions cannot be marked on the map. Figure 13 shows the experimental results.

The black line represents the GPS data, the dotted line represents the path estimated by each algorithm, and the plus sign (+) is the estimated landmark position. Owing to the discontinuity of the GPS data and the lack of ground-truth positions for the natural landmarks, the difference between the estimated positions and the real positions is not calculated. As shown in Figure 13, the path estimated by the proposed algorithm is the most consistent with the GPS trajectory. Because of the linearization error of the FastSLAM 2.0 algorithm, its estimated path does not match the GPS trajectory. Because the cumulative errors of UFastSLAM and the proposed algorithm are small, the differences between their estimated paths and the GPS trajectory are smaller than that of FastSLAM 2.0. The execution time of the proposed algorithm is 674 seconds, which is less than the 1029 seconds of UFastSLAM and the 952 seconds of FastSLAM 2.0.

6. Conclusions

The emergence of edge-cloud collaboration provides a new solution to the SLAM problem. The distributed, parallel stream service in the cloud extends the computing capacity of mobile robots and improves the accuracy of SLAM with more particles. The edge builds the local map to meet the real-time requirements. The cloud changes the traditional serial sampling and resampling process into distributed parallel execution, and the cloud computing power is used to improve the execution efficiency. Edge-cloud collaborative computing can compensate for the insufficient computing ability of mobile robots and enhance their environmental adaptability.

Based on the simulation environments and two public datasets, the proposed algorithm, FastSLAM 2.0, and UFastSLAM are compared. The experimental results show that the proposed algorithm has the highest estimation accuracy and the lowest RMSE of the three. They also show that the proposed algorithm has the shortest execution time, which ensures the real-time performance of SLAM. For SLAM algorithms based on the particle filter, the accuracy can be improved by increasing the particle number. Increasing the number of processor cores in the cloud enables the proposed algorithm to use more particles to improve the accuracy without increasing the computing time; the accuracy and efficiency of the proposed algorithm can be improved further as the cloud computing power increases. Compared with the other two algorithms, the proposed algorithm transfers the heavy computation from robots to the cloud and enhances the environmental adaptability of mobile robots. It not only ensures real-time performance but also improves accuracy and efficiency through the edge-cloud collaborative architecture. Given the low-cost edge and the performance improvement, the proposed algorithm has high utility and promotion value.

The proposed algorithm combines the power of the edge and the cloud to improve reliability and efficiency. However, several aspects need further study. (1) The proposed algorithm can be regarded as a hybrid SLAM algorithm. The particle number and the transmission delay of the cloud are important factors affecting the fusion results. Further study will optimize the proposed algorithm by testing the edge-cloud fusion under different network environments and different particle numbers. (2) In the experiments, only single-robot edge-cloud fusion was tested. In the future, experiments on multirobot edge-cloud fusion will be completed, and resource scheduling and task optimization under bursts of streaming big data will be a research focus.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no competing interests.

Acknowledgments

This work was financially supported by the China Postdoctoral Science Foundation (2019M651844), Jiangsu Postdoctoral Science Foundation (2018K035C), Excellent Scientific and Technological Innovation Team of Jiangsu Higher Education Institutions of China (Maritime big data team), Six Talent Peaks Project in Jiangsu Province (XYDXX-149), and Young Academic Leaders for QingLan Project of the Jiangsu Higher Education Institutions of China.