Abstract

To further improve positioning accuracy, this paper proposes an indoor vision/INS integrated mobile robot navigation method using a multimodel-based multifrequency Kalman filter. First, to overcome the insufficient accuracy of visual data when a robot turns, a novel multimodel integrated scheme is investigated for mobile robots with Mecanum wheels, which can make fixed-point turns. Second, a multifrequency Kalman filter is used to fuse the position information from both the inertial navigation system and the visual navigation system, which overcomes the problem of the filtering period of the integrated navigation system being too long. The proposed multimodel multifrequency Kalman filter gives a root mean square error (RMSE) of 0.0184 m in the east direction and 0.0977 m in the north direction, compared with 0.8925 m and 0.9539 m, respectively, for the visual navigation system alone. Experimental results show that the proposed method is effective.

1. Introduction

In recent decades, the autonomous capabilities of mobile robots have received increasing attention, and their applications have become widespread [1–4]. However, robot navigation and positioning need to be studied further to solve new challenges.

To realize mobile robot self-navigation, many approaches can be chosen [5, 6]. The inertial navigation system (INS) and the vision navigation system (VNS) are the two most commonly used schemes, and their sensors have gradually become standard on mobile robot systems [7, 8]. For example, an INS is used to achieve self-navigation in [9], while a visual navigation system is used for the navigation tasks in [10]. The INS measures the navigation information using an inertial measurement unit (IMU), composed of a gyroscope, an accelerometer, and a magnetometer in three directions, without external interference [11–13]. The navigation accuracy of the INS is high over a short period; nevertheless, its errors accumulate over time. Compared with the INS, visual sensors have some advantages: they are lightweight, their power consumption is low, and their cost is low. They can also provide a wealth of environmental features and information [14]. However, the processing speed of visual navigation is relatively slow. Moreover, the images of a visual sensor are easily affected by light, rain, snow, and dynamic interference. It is therefore difficult for the VNS alone to provide fully continuous navigation and positioning information.

After the sensors themselves, the data fusion algorithm has the greatest influence on navigation accuracy [15]. Thus, many approaches have been proposed to improve data fusion. For instance, an extended Kalman filter (EKF) is designed for mobile robot localization in indoor environments [8]. However, the stronger the nonlinearity of a problem is, the less negligible the higher-order terms of the Taylor-series expansion become. In [6, 16, 17], the unscented Kalman filter (UKF) is investigated for INS/GPS integration. The UKF handles nonlinear functions through the unscented transform (UT) without linearization and preserves the higher-order terms. Thus, the UKF provides higher computing precision [18, 19].

This work tries to improve the localization accuracy of indoor vision/INS integrated mobile robot navigation using a multimodel-based multifrequency Kalman filter. Firstly, to overcome the insufficient accuracy of visual data when the robot turns, a novel multimodel integrated scheme has been investigated for mobile robots with Mecanum wheels; these robots make fixed-point turns, that is, the robot's position remains unchanged in the process of turning. Secondly, a multifrequency Kalman filter (MKF) is engaged to fuse the positions given by both the inertial navigation and visual navigation systems, which overcomes the problem that the filtering period of the integrated navigation system is too long. The experiments show that the proposed multimodel-based multifrequency Kalman filter improves the positioning accuracy effectively.

The rest of this paper is organized as follows. The indoor integrated navigation scheme employing the vision and INS data is given in Section 2, while Section 3 introduces the multifrequency Kalman filter. Experiments can be found in Section 4. Finally, Section 5 summarizes the main conclusions.

2. Indoor Integrated Navigation Scheme Employing the Vision and INS Data

In this section, we design a navigation scheme to integrate and fuse the vision and INS data in complex indoor environments. Figure 1 shows the diagram of the integrated navigation scheme. The main steps of the integrated navigation strategy are as follows:
(1) The INS employs the inertial measurement unit (IMU) to compute the INS-based robot position, velocity, acceleration, and posture.
(2) The vision navigation system (VNS) employs the binocular vision camera to provide the vision-based robot position and posture from the images.
(3) The IMU data are used to judge the state of the robot.
(4) The robot state is used to preliminarily correct the measurement data.
(5) The MKF provides the navigation information by fusing the INS and vision data.
(6) The INS position is corrected using the output of the MKF, which yields accurate robot position information.

From the integrated navigation scheme mentioned above, one important step is the judgement of the robot state. This paper focuses on robots with Mecanum wheels, and we assume that the robot can only turn with constant velocity at a fixed point. When the robot turns with constant velocity, the yaw and acceleration measured by the IMU are used to judge the turning state, since the INS has relatively high positioning accuracy over a short period of time.

In this paper, we employ four functions to describe the robot's turning, which are listed in Table 1. When equation (1) is satisfied, we consider that the robot begins to turn, where the threshold marks the beginning of the turn.

When equations (2) and (3) are satisfied, we consider that the robot's turn is over, where the threshold marks the end of the turn.
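The start/end conditions above can be sketched as a simple threshold test on the IMU outputs. The function below is an illustrative reconstruction, not the paper's exact equations: the threshold values, the use of the acceleration norm, and all names are assumptions.

```python
import numpy as np

def detect_turn(yaw_rate, accel_norm, start_thresh=0.05,
                end_thresh=0.01, accel_thresh=0.1):
    """Label each IMU sample as turning (1) or not turning (0).

    A fixed-point turn is assumed: it starts when the yaw rate
    exceeds start_thresh while the translational acceleration is
    small, and it ends when the yaw rate falls below end_thresh.
    All threshold values here are illustrative.
    """
    turning = np.zeros(len(yaw_rate), dtype=int)
    state = 0
    for k, (w, a) in enumerate(zip(yaw_rate, accel_norm)):
        if state == 0 and abs(w) > start_thresh and a < accel_thresh:
            state = 1  # turn begins (cf. equation (1))
        elif state == 1 and abs(w) < end_thresh:
            state = 0  # turn ends (cf. equations (2) and (3))
        turning[k] = state
    return turning
```

Hysteresis (a lower end threshold than start threshold) keeps the state from chattering while the yaw rate settles.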

If the robot is not turning, the different sampling periods of the VNS and the INS mean that the filtering period of a traditional single-rate Kalman filter can only be the slower cycle or the least common multiple of the two sampling periods. This makes the sampling period of the traditional filter too large, and the positioning accuracy becomes worse. One method to solve this problem is to use a multifrequency Kalman filter.

3. Engaging a Multifrequency Kalman Filter

3.1. Robot Navigation Coordinate System

This paper uses two frames: the body frame and the navigation frame.

In the body frame, the three axes are mutually perpendicular. The navigation frame used in this paper is the E-N-U coordinate system. The coordinate system origin is located at the mobile robot center; the X-axis points to the east, the Y-axis to the north, and the Z-axis is perpendicular to the horizontal plane.

During positioning, the robot needs to convert the body frame to the navigation frame. Meanwhile, the state information of the camera and the coordinate transformation of the camera positioning data in the body frame should be acquired, so that the state information of the robot in the navigation frame can be obtained. Firstly, we rotate the body frame according to the Euler angles so that the body frame and the navigation frame coincide. The Euler angles comprise the heading angle, the pitch angle, and the roll angle. The pitch angle is the angle between the carrier's Y-axis and the horizontal plane. The roll angle represents the angle between the carrier's horizontal axis and the ground plane. The heading angle refers to the angle between the projection of the Y-axis on the horizontal plane and the direction of north. A transformation matrix (equation (4)) can be used to complete the conversion from the body frame to the navigation frame.
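The conversion described above can be sketched by composing three elementary rotations. The rotation order below (heading about Z, then pitch about X, then roll about Y) is one common convention for E-N-U frames and is an assumption; the paper's equation (4) may order the factors differently.

```python
import numpy as np

def rz(a):
    """Elementary rotation about the Z-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rx(a):
    """Elementary rotation about the X-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def ry(a):
    """Elementary rotation about the Y-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def body_to_nav(heading, pitch, roll):
    """Transformation matrix from the body frame to the E-N-U
    navigation frame, composed from the three Euler angles."""
    return rz(heading) @ rx(pitch) @ ry(roll)
```

Any valid transformation matrix of this kind is orthonormal (its inverse equals its transpose), which provides a quick sanity check on the composition.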

3.2. Kalman Filter Multifrequency State Equations

The linear Kalman filter in each filtering period is usually divided into two processes: the time update and the measurement update. The time update propagates the current state variables and error covariance forward to construct an a priori estimate of the state at the next moment; the measurement update provides feedback, combining the a priori estimate with new measurements to construct an improved a posteriori estimate.

The state equation of the multifrequency Kalman filter is listed in equation (5). The state vector at each time point contains the position and velocity errors in the east direction and those in the north direction; the equation also involves the sampling time and the system noise, whose covariance is known.

The measurement equation of the multifrequency Kalman filter is as follows. The measurement at each time point is formed from the INS position information and the VNS position information, and the measurement noise has known covariance.
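For a concrete picture of the state and measurement equations, the matrices below encode a constant-velocity error model for a state vector ordered as [δp_E, δv_E, δp_N, δv_N]; this ordering and the value of the sampling time are assumptions for illustration.

```python
import numpy as np

T = 0.02  # sampling time in seconds (illustrative value)

# State transition: each position error is advanced by its velocity
# error over one sampling period (constant-velocity error model).
F = np.array([[1.0, T, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, T],
              [0.0, 0.0, 0.0, 1.0]])

# Measurement matrix: the observation (INS position minus VNS
# position) exposes only the two position-error components.
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
```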

The sampling times of different sensors differ. For the navigation system integrating INS and vision information, the sampling time of the IMU is much shorter than that of the VNS. At present, the sampling period of the data fusion filter is usually made consistent with the slow subsystem, which may lose some data. To improve the localization accuracy, the multifrequency Kalman filter based on the integrated navigation scheme is proposed in this paper. Its pseudocode, based on the multimodel integrated navigation scheme, is listed in Algorithm 1. The filter includes three parts. The first is the judgement of the robot's status (lines 3–5): if the robot is detected to be turning, then, since the robot is set to rotate in place, the VNS-based robot position does not change, so the previous VNS position is held. The second is the measurement update of the multifrequency Kalman filter (lines 8–12), performed at those INS time steps whose elapsed time is an integer multiple of the VNS sampling time, so that a VNS measurement is available. Otherwise, only the INS measurement is available; in that case the INS error is considered huge, the corresponding measurement noise terms tend to infinity, and only the time update is carried out (lines 12–15).

(1) Input: F, H, Q, R, P(0); Output: x̂(k)
(2)  for k = 1 to N do
(3)   if the robot turns then
(4)     p_VNS(k) ← p_VNS(k − 1);
(5)   end if
(6)   x̂⁻(k) ← F x̂(k − 1);
(7)   P⁻(k) ← F P(k − 1) Fᵀ + Q;
(8)   if mod(k·T_INS, T_VNS) = 0 then
(9)    K(k) ← P⁻(k) Hᵀ (H P⁻(k) Hᵀ + R)⁻¹;
(10)    z(k) ← p_INS(k) − p_VNS(k);
(11)    x̂(k) ← x̂⁻(k) + K(k)(z(k) − H x̂⁻(k)); P(k) ← (I − K(k) H) P⁻(k);
(12)   else
(13)    x̂(k) ← x̂⁻(k);
(14)    P(k) ← P⁻(k);
(15)   end if
(16)  end for
(17) end
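Algorithm 1 can be sketched as runnable code under the same assumptions as above (state vector [δp_E, δv_E, δp_N, δv_N], measurement formed as INS position minus VNS position, and an integer ratio between the two sampling times). Variable names and the demo values are illustrative, not the paper's.

```python
import numpy as np

def multifrequency_kf(ins_pos, vns_pos, T_ins, T_vns, F, H, Q, R, P0):
    """Multifrequency Kalman filter sketch: predict at the fast INS
    rate, update only at steps where a VNS sample is available."""
    n = F.shape[0]
    x = np.zeros(n)
    P = P0.copy()
    ratio = int(round(T_vns / T_ins))   # assumed integer sampling ratio
    est = []
    for k in range(len(ins_pos)):
        # time update at the fast INS rate (lines 6-7 of Algorithm 1)
        x = F @ x
        P = F @ P @ F.T + Q
        if k % ratio == 0:
            # VNS sample available: measurement update (lines 8-11)
            z = ins_pos[k] - vns_pos[k // ratio]
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + K @ (z - H @ x)
            P = (np.eye(n) - K @ H) @ P
        # otherwise only the time update is kept (lines 12-15)
        est.append(ins_pos[k] - x[[0, 2]])  # INS position minus error
    return np.array(est)

# minimal demo: a stationary robot with all-zero measurements
T = 0.1
F = np.array([[1.0, T, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, T],
              [0.0, 0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
Q = 1e-4 * np.eye(4)
R = 1e-2 * np.eye(2)
est = multifrequency_kf(np.zeros((6, 2)), np.zeros((3, 2)),
                        T, 2 * T, F, H, Q, R, np.eye(4))
```

In the demo the INS runs twice as fast as the VNS, so the measurement update fires on every second step, mirroring the mod condition on line 8.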

4. Experiments

In this section, the experimental verification is listed. Firstly, we introduce the experimental arrangement. Secondly, the judgement of robot running state will be investigated. Finally, the localization performance will be studied.

The reference path is marked with red lines in the experimental scene shown in Figure 2. During the experiment, the robot moved three meters forward and then three meters to the left to reach the end point. It can be seen from Figure 2 that there is little texture in the scene at the turning point of the robot, which is not conducive to the positioning of the visual navigation system.

4.1. Experimental Arrangement

To verify the proposed data fusion algorithm, experimental verification is carried out. The test is done in a building of the University of Jinan, shown in Figure 2 together with the reference path, starting point, and navigation frame. The testbed includes a mobile robot, an IMU, a binocular camera, and a computer. For the IMU, we employ the I9DOFv3, which includes the ADXRS620, ADXL203, and HMC5983. Among them, the updating rates of the ADXRS620 and ADXL203 range from 4 Hz to 2000 Hz. The output rate of the HMC5983 is 220 Hz, and the size of the IMU is 50 × 30 × 15 mm. The highest frame rate of the MYNTAI binocular camera is 60 FPS, with a baseline of 50.0 mm, a focal length of 3.3 mm, and a size of 124 × 33.3 × 32.5 mm. The INS-based robot position and posture are measured by the IMU, while the MYNTAI binocular camera measures the vision-based position and posture. The computer collects the sensor data via software. It should be pointed out that the IMU, the binocular camera, and the computer are fixed on the mobile robot. For the test, the robot runs along the reference path, which is set up in advance, while the IMU and binocular camera measure the mobile robot's position. The robot and frame used in the experiment are shown in Figure 3. The sampling times and the noise parameters used by the MKF and the MKF with multimodel in this experiment are listed in Table 2.

4.2. Judgement of Robot Running State

In this section, we investigate the judgement of the robot's running state. The state of the robot is divided into stationary and motion states, and the motion state is further divided into turning and going straight. When the robot is stationary, the yaw angle remains unchanged and the accelerometer value is zero; when the robot goes straight, the yaw angle remains unchanged while the accelerometer value is nonzero; when the robot turns at a fixed point, the yaw angle changes continuously while the accelerometer value stays the same. The turning state of the robot is set to 0 when it is not turning and to 1 during turning. Figure 4 shows that the turning of the robot can be judged reliably using the conditions proposed in this paper.

4.3. Localization Error

The localization performance of the proposed method is investigated in this section. To verify the performance of the MKF with the multimodel method, we compare its localization errors with those of the INS, vision, and MKF. The reference trajectory and the trajectories estimated by the INS, vision, MKF, and MKF with multimodel method are shown in Figure 5. From the figure, it can be seen that error accumulation appears in the INS trajectory, which indicates that the INS is not suitable for long-term navigation. Compared with the INS trajectory, the trajectory estimated by the vision system is much closer to the reference path in the initial stage.

However, when the robot turns, the motion direction of the vision-based system deviates from the reference trajectory, resulting in a huge error. Because it relies on the vision-based measurement, the MKF-estimated trajectory also has huge localization errors after the robot turns. The trajectory estimated by the proposed MKF with multimodel method is much closer to the ground-truth trajectory than those of the other methods mentioned above.

To examine the performance of the proposed algorithm in depth, we show the localization errors of the INS, vision, MKF, and MKF with multimodel in the east and north directions in Figures 6 and 7, respectively. From the figures, it is clear that the proposed MKF with multimodel has the smallest localization errors in both directions. Compared with it, both the vision system and the MKF have much larger errors. It should be pointed out that we do not include the INS in this comparison because its performance is much worse than that of all the other methods.

The RMSEs and mean errors of the vision, MKF, and MKF with multimodel method are listed in Table 3. From the table, one can infer that the proposed MKF with multimodel produces the most accurate localization, which leads to the conclusion that our method effectively increases the positioning accuracy.
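The per-axis RMSE figures in Table 3 can be reproduced from logged trajectories with a few lines; the array layout assumed here (rows are samples, columns are east/north) is an illustration, not the paper's data format.

```python
import numpy as np

def rmse(est, ref):
    """Root mean square error per axis (east, north)."""
    err = np.asarray(est) - np.asarray(ref)
    return np.sqrt(np.mean(err ** 2, axis=0))
```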

5. Conclusions

In this work, an indoor vision/INS integrated mobile robot navigation method using a multimodel-based multifrequency Kalman filter has been proposed. To overcome the insufficient accuracy of visual data when the robot turns, a novel multimodel integrated scheme has been investigated for mobile robots with Mecanum wheels, which assumes that the position of the robot remains unchanged while it turns in place. For the data fusion filter, a multifrequency Kalman filter is engaged to fuse the position information given by both the inertial navigation and visual navigation systems, which overcomes the problem that the filtering period of the integrated navigation system is too long. The experiments verify that the proposed multimodel-based multifrequency Kalman filter can effectively improve the positioning accuracy.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported in part by the Shandong Key R&D Program under Grants 2019GGX104026 and 2019JZZY021005 and in part by the National Natural Science Foundation of China under Grant 61803175.