Abstract

This paper proposes an algorithm based on the Multi-State Constraint Kalman Filter (MSCKF) to construct maps for robots, especially in environments with a poor GPS signal. With the MSCKF algorithm, the position of the robot is calculated by tightly coupling the data collected by the inertial measurement unit with the features extracted by the camera. The paper focuses on how the position is optimized, because the optimized position is used to compute the Kalman gain that updates the robot state. To reduce the processing time, we design a novel fast Gauss–Newton MSCKF algorithm to carry out the nonlinear optimization. Compared with the conventional MSCKF algorithm, the novel fast-location algorithm reduces the processing time on the KITTI datasets.

1. Introduction

The odometry of a robot can be divided into two parts: motion propagation and a state-update end [1, 2]. Motion propagation computes the motion matrix from the sensor data; the result is then refined by the update end. Many robots use cameras to build visual odometry because cameras are easy to mount [3]. Visual odometry extracts matching features from the video frames to propagate the motion matrix. However, visual odometry has several drawbacks, such as long processing time and errors caused by mismatched features. To reduce the processing time, some researchers used two threads to compute the motion equation and update the result, respectively [4, 5]. However, this approach increases the hardware cost of the localization system. To improve the processing time without additional hardware, Chansoo used corner features to replace ordinary features [6]. However, the information of such features is sometimes lost, which makes it hard to compute the location of the robot. Moreover, corner features are difficult to extract in texture-poor environments, so the error of the system becomes large. To reduce the processing time and improve the accuracy of visual odometry at the same time, more and more researchers began to adopt different sensors to compensate for the drawbacks of visual odometry [7–12]. GPS provides an accurate location of the robot in outdoor environments, and an inertial measurement unit (IMU) can compensate for the error in environments with a poor GPS signal [13, 14]. For robot applications in poor-GPS environments, many researchers adopted an IMU to construct visual-inertial odometry (VIO) [15–19]. To compute the motion matrix and construct the map of the robot with VIO, we adopt the MSCKF algorithm. The propagation step of the MSCKF algorithm uses the IMU data to propagate the motion matrix, and the update end uses the observed constraint features to update the motion matrix within the extended Kalman filter (EKF) framework [16, 20, 21]. However, the processing time and accuracy of the conventional MSCKF are unsatisfactory. Ramezani et al. expanded the range of the observed features to improve the accuracy of the MSCKF algorithm [22]. Li et al. modeled the error of the gravity acceleration and the bias error of the IMU to improve the accuracy of the MSCKF algorithm [23, 24]. Some researchers added loop-closure detection to update the initial point of the robot [24, 25]. These methods improved the accuracy of the system but increased the processing time [25]. To improve the processing time, Eckenhoff et al. proposed online verification to reduce the update time and improve the accuracy of the MSCKF algorithm, although it increases the hardware cost [26]. Heo et al. applied sliding windows to replace the ordinary feature-extraction method of the MSCKF algorithm, reducing the amount of computation and improving the processing time [17, 27, 28]. However, the processing time was not significantly reduced. Some researchers focused on improving the back end of the MSCKF algorithm, because the update end, which is based on the EKF, consumes most of the processing time.
Some researchers applied the Cubature Kalman Filter (CKF) and the Maximum Likelihood Estimation (MLE) method to replace the EKF, although the processing time is still not ideal [29, 30]. Alibay et al. applied a staged Random Sample Consensus (RANSAC) method to propagate the motion matrix in order to meet real-time requirements [31]. Nikolic applied a Field Programmable Gate Array (FPGA) to preprocess the IMU data and reduce the processing time [32]. However, these methods increase the hardware cost.

To reduce the processing time, we design a novel fast Gauss–Newton MSCKF algorithm without additional hardware cost. The key point of our work is to optimize the position and attitude with the novel nonlinear method and then use the result to compute the Kalman gain within the EKF framework.

2. The MSCKF Algorithm

The MSCKF algorithm is based on the EKF architecture and is composed of three parts: propagation, augmentation, and update [16]. In the propagation step, the IMU state is used to propagate the state of the robot. The propagation equation is as follows [16]:

$$\dot{\tilde{X}}_{IMU} = F\,\tilde{X}_{IMU} + G\,n_{IMU},$$

where $F$ is the coefficient matrix of the IMU state, computed from the IMU accelerometer and gyroscope data, and $G$ is the coefficient matrix of the noise. We then compute the Jacobian matrix to complete the state augmentation. We denote the covariance matrix of the state at the jth frame by $P_j$. It is composed of the covariance matrix of the IMU state $P_{II}$, the cross-covariance between the IMU and the camera states $P_{IC}$, and the covariance matrix of the camera states $P_{CC}$:

$$P_j = \begin{pmatrix} P_{II} & P_{IC} \\ P_{IC}^{T} & P_{CC} \end{pmatrix}.$$
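As a hedged illustration of this propagation step (not the authors' implementation; the function name propagate_covariance and all variable names are ours), the following minimal Python/NumPy sketch propagates the error-state covariance over one IMU interval using a first-order discretization of F and G:

```python
import numpy as np

def propagate_covariance(P, F, G, Q, dt):
    """First-order discrete propagation of the error-state covariance.

    P  : (n, n) current error-state covariance
    F  : (n, n) continuous-time state coefficient matrix (built from IMU readings)
    G  : (n, m) noise coefficient matrix
    Q  : (m, m) continuous-time IMU noise covariance
    dt : IMU sampling interval in seconds
    """
    n = P.shape[0]
    Phi = np.eye(n) + F * dt            # first-order state transition matrix
    Qd = G @ Q @ G.T * dt               # discretized process noise
    return Phi @ P @ Phi.T + Qd
```

In a full MSCKF, only the IMU block and the IMU-camera cross-covariance change during propagation, but the sketch above conveys the basic operation.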

Considering the associations among the features, the MSCKF algorithm updates the propagated state using the collection of observed features and eliminates features that are no longer observed or are worthless. We then compute the reprojection errors of the observed features with nonlinear optimization to update the state matrix. The observation equation is written as follows:

$$r = H\,\tilde{X} + n,$$

where $r$ is the residual error of the features, $H$ denotes the Jacobian matrix, and $\tilde{X}$ denotes the error of the state propagated through the motion equation above. We now need to compute the matrix $H$ to obtain the Kalman gain. The residual error is computed as the difference between the real position of the jth feature and the projected estimate of that position:

$$r_j^{(i)} = z_j^{(i)} - \hat{z}_j^{(i)}.$$

Here, $\hat{z}_j^{(i)}$ denotes the estimated position of the jth feature observed by the ith camera, and $z_j^{(i)}$ denotes the measured position of the jth feature. To reduce the amount of computation, we apply QR decomposition to $H$ to accelerate the solution of the equations, which yields the nonsingular upper triangular matrix $T_H$:

$$H = \begin{pmatrix} Q_1 & Q_2 \end{pmatrix}\begin{pmatrix} T_H \\ 0 \end{pmatrix}.$$
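In code, this reduction could be sketched as follows (a minimal NumPy illustration with names of our own choosing, not the authors' implementation): since H = Q T_H, multiplying the stacked residual by Q^T yields an equivalent, lower-dimensional measurement model driven by the nonsingular upper triangular block T_H.

```python
import numpy as np

def qr_reduce(H, r):
    """Reduce the measurement model r = H x + n via QR decomposition.

    H : (m, n) stacked measurement Jacobian (m >= n)
    r : (m,) stacked residual
    Returns the upper triangular T_H and the rotated residual, each with n rows.
    """
    Q, R_full = np.linalg.qr(H, mode="reduced")  # H = Q @ R_full
    T_H = R_full                                 # nonsingular upper triangular block
    r_reduced = Q.T @ r                          # residual expressed in the same basis
    return T_H, r_reduced
```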

Now we can use $T_H$ to obtain the Kalman gain:

$$K = P\,T_H^{T}\left(T_H\,P\,T_H^{T} + R\right)^{-1},$$

where $P$ is the state covariance matrix of the camera and $R$ is the noise covariance matrix. The state covariance of the (j + 1)th frame is then updated as follows:

$$P_{j+1} = \left(I - K\,T_H\right)P_j\left(I - K\,T_H\right)^{T} + K\,R\,K^{T}.$$
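Under the same assumptions as the previous sketch (our own illustration, not the authors' code), the gain computation and covariance update could look like the following; the Joseph form is used for the covariance update to preserve symmetry.

```python
import numpy as np

def ekf_update(P, T_H, r_reduced, R_noise):
    """EKF update with the reduced measurement model.

    P         : (n, n) prior state covariance
    T_H       : (k, n) reduced measurement Jacobian (upper triangular block)
    r_reduced : (k,) reduced residual
    R_noise   : (k, k) measurement noise covariance
    """
    S = T_H @ P @ T_H.T + R_noise                  # innovation covariance
    K = P @ T_H.T @ np.linalg.inv(S)               # Kalman gain
    dx = K @ r_reduced                             # state correction
    I_KH = np.eye(P.shape[0]) - K @ T_H
    P_new = I_KH @ P @ I_KH.T + K @ R_noise @ K.T  # Joseph-form covariance update
    return dx, P_new
```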

3. Our Method

The optimized position determines the Kalman gain in the update equation above, and calculating this optimized value costs the most processing time: the conventional method adopts the Gauss–Newton optimization method to compute the value that is used in the Kalman gain calculation. To reduce the processing time, we employ a fast Gauss–Newton method to perform the nonlinear optimization. The whole optimization process is divided into two stages.

In the first stage, the difference between the current estimate and the optimum is still large, so we use the gradient descent method for the optimization. We take the first-order Taylor expansion of the residual function as follows:

$$f(x + \Delta x) \approx f(x) + J(x)\,\Delta x.$$

From this expansion, the gradient-descent direction of the cost $\frac{1}{2}\left\|f(x)\right\|^{2}$ is $-J(x)^{T} f(x)$. However, the gradient descent method may produce a divergent result if the iterative step size is too large.

In the second stage, when the optimized value is very close to the prior value, we use the Gauss–Newton method to obtain $\Delta x$ so that the convergence of the algorithm is preserved. The objective of the Gauss–Newton algorithm is to minimize $\frac{1}{2}\left\|f(x) + J(x)\,\Delta x\right\|^{2}$. Setting its derivative with respect to $\Delta x$ to zero gives the Gauss–Newton equation:

$$J(x)^{T} J(x)\,\Delta x = -J(x)^{T} f(x).$$

Then, we can compute the value of $\Delta x$ as follows:

$$\Delta x = -\left(J(x)^{T} J(x)\right)^{-1} J(x)^{T} f(x).$$
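For a residual f with Jacobian J, this increment can be computed as in the short sketch below (our own illustration; names hypothetical). In practice the normal equations are solved directly rather than forming the inverse explicitly.

```python
import numpy as np

def gauss_newton_step(f, J):
    """Gauss-Newton increment: solves J^T J dx = -J^T f without forming the inverse.

    f : (m,) residual vector evaluated at the current estimate
    J : (m, n) Jacobian of the residual at the current estimate (J^T J assumed nonsingular)
    """
    return np.linalg.solve(J.T @ J, -J.T @ f)
```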

In order to compute the real value of the motion matrix, we substitute the prior values of the coordinates into the above equations. We write the prior-estimated position of the jth frame as follows, where $\hat{p}_j$ denotes the prior-estimated position of the jth frame and $R(\hat{q}_j)$ denotes the rotation matrix of the corresponding quaternion. We can then rewrite the above equation by normalization as follows:

To make the expression clearer, we introduce the following notation:

Now, we can update the estimate as follows:

Now, we update the error between the estimated value and the prior value as follows:

The prior value used here is the normalized value. To quantify the effect of each iteration, we define the following:

In the above equation, R denotes the weight of the noise, which includes the gyroscope bias and the accelerometer bias. To reflect the effect of the optimization, we denote the difference between the prior value and the optimized value by $\Delta$:

In the first stage, when $\Delta$ is larger than the threshold $\varepsilon$, we take the gradient-descent direction as the update step $\Delta x$; the threshold $\varepsilon$ is set to the empirical value 0.5. The step is written as follows, where $J$ denotes the Jacobian matrix of the error $e$:

$$\Delta x = -J^{T} e.$$

In the second stage, when the optimized value is close to the real one, that is, when $\Delta$ is less than $\varepsilon$, we choose the Gauss–Newton algorithm to complete the nonlinear optimization in order to guarantee convergence. The step $\Delta x$ is then:

$$\Delta x = -\left(J^{T} J\right)^{-1} J^{T} e.$$

If $\Delta$ is smaller than 0.01, the algorithm has converged, and we stop the iteration.
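To make the two-stage scheme concrete, here is a minimal Python sketch of our own (residual_fn, jacobian_fn, and the step size are hypothetical; the switching threshold 0.5 and the convergence threshold 0.01 are taken from the text, and the norm of the candidate gradient step stands in for the difference between the prior and optimized values).

```python
import numpy as np

def fast_gauss_newton(x0, residual_fn, jacobian_fn,
                      epsilon=0.5, tol=0.01, step=0.1, max_iter=50):
    """Two-stage nonlinear optimization: gradient descent first, then Gauss-Newton.

    x0          : initial (prior) estimate of the position/attitude parameters
    residual_fn : callable, x -> residual vector e(x)
    jacobian_fn : callable, x -> Jacobian J(x) of the residual
    epsilon     : threshold that triggers the switch to Gauss-Newton
    tol         : convergence threshold on the change between iterations
    step        : gradient-descent step size
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        e = residual_fn(x)
        J = jacobian_fn(x)
        grad_step = -step * (J.T @ e)                   # stage 1: gradient-descent step
        if np.linalg.norm(grad_step) > epsilon:
            dx = grad_step                              # still far from the prior
        else:
            dx = np.linalg.solve(J.T @ J, -J.T @ e)     # stage 2: Gauss-Newton step
        x = x + dx
        if np.linalg.norm(dx) < tol:                    # converged: change below 0.01
            break
    return x
```

The switch keeps the cheap, robust gradient step while the estimate is far from the prior and reserves the more expensive but fast-converging Gauss–Newton step for the final iterations.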

4. Experiment

The experiments in this paper are based on the KITTI datasets. Preprocessing the datasets consists of extracting the features and checking the extrinsic parameters of the cameras [33]. In the computation, the accelerometer bias and the gyroscope bias are set to a common value, and the same value is used for the accelerometer noise and the gyroscope noise. The results are shown in Table 1.

Dataset No. 1 in Table 1 contains 98 frames. Compared with the conventional MSCKF algorithm, the processing time decreases by 20.7% when the novel fast Gauss–Newton MSCKF algorithm is adopted. We then calculate the final error as the difference between the result computed by each algorithm and the real value. Compared with the final error obtained from the IMU data alone, the final error of the conventional MSCKF decreases by 56.5%, and the final error of the novel fast Gauss–Newton MSCKF decreases by 55.1%. The final error of the novel MSCKF algorithm is 3.0% larger than that of the conventional MSCKF algorithm.

Dataset No. 2 in Table 2 contains 239 frames. Compared with the conventional MSCKF algorithm, the processing time of the novel fast Gauss–Newton MSCKF decreases by 6.0%. Compared with the result obtained from the IMU data alone, the final error of the conventional MSCKF increases by 55.0%. The final error of the novel Gauss–Newton MSCKF algorithm is 6.0% larger than that of the conventional MSCKF algorithm.

We then compare the trajectory computed by the conventional MSCKF algorithm with the one computed by the novel MSCKF algorithm. To make the comparison clear, we also compare the trajectory computed by the conventional MSCKF using the IMU data alone with the ground-truth trajectory.

Figures 1 and 2 show the trajectories calculated by the different algorithms. The blue trajectory in Figure 1 is calculated by the conventional MSCKF algorithm, and the blue trajectory in Figure 2 is calculated by the novel Gauss–Newton MSCKF algorithm. In both figures, the red trajectories are calculated by the conventional MSCKF with the IMU data alone, and the green trajectories are the ground truth. In Figure 1, the blue trajectory is closer to the green (ground-truth) trajectory than the red trajectory is, which means the conventional MSCKF algorithm can fuse the data from the IMU and the stereo camera to reduce the error. In Figure 2, the blue trajectory is also closer to the green trajectory than the red trajectory, which means the novel MSCKF algorithm is likewise able to fuse the IMU data and the camera features to reduce the error of the IMU-only MSCKF. Comparing the blue trajectories in the two figures, the blue trajectory in Figure 2 is closer to the red one at the beginning than the blue trajectory in Figure 1 is. However, the error of the novel algorithm becomes larger as the distance grows in Figure 2. We also notice a sudden change in the middle of the trajectory in Figure 2, caused by the switching between the two optimization methods.

We also calculate the trajectories with the above algorithms on dataset No. 2 to test the performance of the novel MSCKF algorithm when the VIO runs longer. In Figures 3 and 4, the red trajectories are calculated by the conventional MSCKF algorithm with the IMU data alone on dataset No. 2, and the green trajectories are the ground truth. The blue trajectory in Figure 3 is calculated by the conventional MSCKF algorithm, and the blue trajectory in Figure 4 is calculated by the novel Gauss–Newton MSCKF algorithm. We notice that the blue trajectory in Figure 3 is closer to the green one at first. However, the blue trajectory deviates from the green trajectory as the distance grows, which means the error of the conventional MSCKF algorithm becomes larger when the VIO runs longer. In Figure 4, the blue trajectory is close to the red trajectory at first. When the distance x is between 50 m and 100 m, the blue trajectory is closer to the green one, which means the error computed by the novel MSCKF decreases as the computation continues. When the distance x exceeds 100 m, the error of the blue trajectory becomes larger again. Nevertheless, the abrupt change in Figure 4 is more obvious than that of the blue trajectory in Figure 2.

5. Conclusion

Comparing the performance of the novel MSCKF algorithm with that of the conventional MSCKF algorithm, the processing time of the novel MSCKF algorithm decreases by 6.0%∼20.7%, while its final error increases by 2.0%∼6.0% relative to the conventional MSCKF algorithm. We conclude that the novel MSCKF can reduce the processing time. However, the final error computed by the fast Gauss–Newton MSCKF becomes larger as the running time increases, which means the novel MSCKF increases the final error; it is therefore suited to applications that require high speed but only modest precision. In addition, there is an obvious abrupt change in the trajectories calculated by the novel MSCKF algorithm because of the switching between the two optimization methods. We also find that the error of the novel MSCKF grows as the VIO runs longer. To reduce the accumulated error, loop-closure detection can be added.

Data Availability

The data used in this manuscript are from the KITTI datasets, which can be accessed through MATLAB. The data can be obtained from https://figshare.com/s/395bc65e815a4e4b3f2f.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Fujian Department of Science and Technology (Grant 2018H0003), Fujian University of Technology (Grant GY-Z17011), Quanzhou Science and Technology Bureau (Grant 2017G012), the Department of Education of Fujian Province (Grant JT180339), and Fujian University of Technology (Grant JGBK201911).