Abstract

An accurate real-time kinematic model is very important for the control of a skid-steering mobile robot. In this study, the kinematic model of a skid-steering mobile robot was first established based on instantaneous centers of rotation (ICRs). Then, the extended Kalman filter (EKF) technique was applied to obtain the parameters of the ICRs online under the same specific terrain. To adapt to different terrain environments, the fractal dimension-based SFTA (segmentation-based fractal texture analysis) method was used to extract features of different terrains, and the k-nearest neighbor (KNN) method was used to classify the terrains. With real-time terrain recognition, the filter parameters of the EKF for estimating the ICRs are adjusted adaptively. Experiments on a real skid-steering mobile robot show that this method can quickly estimate the kinematic model of the robot when the terrain changes, and can meet the needs of practical applications. The average error of the odometry estimation based on visual terrain classification is 0.06 m, while the average error of the odometry estimation without terrain classification is 0.14 m.

1. Introduction

The skid-steering mechanism is widely used in mobile robots and vehicles [1]. The direction of the vehicle is controlled by changing the relative speeds of the left and right wheels or tracks, rather than by an independent mechanical steering mechanism. Therefore, this kind of steering mechanism is simple, effective, and robust, and can realize zero-radius steering. It is especially suitable for robots and vehicles operating on all terrains. However, the inherent sideslip of skid-steering brings a complex wheel-ground contact force relationship. Compared with Ackerman-steering or differential-steering wheeled vehicles, it is more difficult to establish an accurate motion model [2–6]. The high-performance motion control, trajectory planning, and collaborative control of skid-steering mobile robots require accurate motion models [4, 7]. In the past decade, research on accurate motion modeling has mainly been divided into two categories.

The first kind of method mainly establishes models of tire stiffness or track mechanics from the perspective of dynamics [2, 3]. Research in this field often focuses on the forces between the terrain and the motion system. The slip model is the functional relationship between tire slip and traction force. In [8, 9], the authors analyzed and verified the relationships between the traction, traction coefficient, driving torque, and parameters such as wheel radius and width under different slip rates through numerical and experimental methods. The optimized wheel design parameters and the best traction parameters were obtained using a genetic algorithm. However, these parameters were measured offline under specific terrain conditions. In [10], the authors used a unified tire model to establish the dynamic model of a high-speed wheeled mobile robot in general motion and drift states. A drift motion controller was also designed based on the model. However, there is a certain gap between the simulation and experimental results. Aiming at the lateral dynamics of skid-steering wheeled vehicles, the authors in [11] have proposed a two-degrees-of-freedom linear lateral dynamics model to study the steering performance under oversteer, understeer, and neutral-steer conditions. It has been found that the proposed model has good steady-state and transient characteristics under linear conditions [12]. However, skid-steering wheeled mobile robots mostly operate in nonlinear motion regimes, so the use of this linear model is limited. In practical applications, because the dynamics model of a skid-steering wheeled mobile robot is relatively complex and its computational cost is high for real-time control, the dynamics model is usually used only for motion simulation of the robots [2, 3].

The other kind of method is to use the kinematic model to estimate the slip [13]. It is difficult to establish an accurate kinematic model because the motion of a skid-steering robot does not satisfy the ideal nonholonomic constraints. In [4], the authors estimated the slip using the extended Kalman filter (EKF) method to fuse the information of an inertial measurement unit (IMU) and the virtual speed. The experimental results show that when the robot travels about 40 m on a complex road, the position estimation deviation is less than 25 cm. In [14], a delayed-state EKF method was applied to estimate the slip model, which essentially adopted an integral prediction error minimization method. In [15], the authors described the slip model through experiments. The relationship between the slip coefficient and the turning radius was established by an exponential function. The experimental results showed that the control of a skid-steering vehicle based on this model had good performance. In [16], a kinematic model based on ICRs was used. The positions of the ICRs were obtained by using a genetic algorithm to process the recorded experimental data offline. The authors in [17] have analyzed the inconsistency of the wheel speeds on the same side of a four-wheel (4W) skid-steering mobile robot. The simulation results have shown its effectiveness, but further experimental verification is lacking. In [18], the authors have presented a friction-based kinematic model that lies in the region between kinematic models and dynamic models. The parameters of the proposed kinematic model are trained using a nonlinear least squares optimization technique.

Based on the lidar positioning method, the authors in [13] have adopted the model presented in [16] to obtain the functional relationship between the ICRs and the trajectory radius and velocity. The experimental results showed that under the same terrain and road surface, the ICRs changed within a small range (1.4–1.5). These results were consistent with the study in [16]. The experimental results also showed that the empirical formula model obtained in [13] significantly improved the accuracy of dead-reckoning positioning. However, this experimental method can only obtain the functional relationship offline. In [19], the authors have proposed a method to obtain the ICRs' values based on an EKF online estimation technique. The kinematic model of the robot can be obtained directly by measuring the given speed, yaw angle, and position of the robot. It should be noted that under terrain-change conditions, the convergence time of the proposed method is very long, even up to 100–150 s, which greatly limits its practical application. The authors in [20] have combined multi-innovation theory with the unscented Kalman filter (UKF) to estimate the motion of high-speed off-road skid-steering mobile robots, where the position, heading, velocities, and wheel slip can be estimated simultaneously. In [21], a variational Bayesian learning-based adaptive Kalman filter method was designed to obtain a better tracking effect for time-varying noise and higher filtering accuracy than the classical Kalman filter. In this study, the noise of the filter mainly mutates with the change of terrain. Therefore, one way to deal with the real-time identification of the kinematic model of a skid-steering mobile robot is to establish the model estimation algorithm under the corresponding terrain conditions. The corresponding terrain conditions can be obtained by a terrain classification and recognition method.

There is a large body of research on terrain recognition methods for robots [22–25]. The terrain classification methods are mainly realized through lidar, vision, or vibration measurement. The vibration-based terrain classification method extracts the feature vector of the signal by analyzing the vibration signals between the wheels of the mobile robot and the ground. The feature vector extraction methods include statistical features [22], the fast Fourier transform (FFT), and the power spectral density (PSD) [24]; then, the k-nearest neighbors (KNN) method, decision tree-based methods, the probabilistic neural network (PNN), or the support vector machine (SVM) method is used to realize the terrain classification [26].

Combined with the existing research work [13, 19], this study proposes an EKF-based method to learn the ICRs of the robot. The motion of the robot can be predicted through the kinematic model established by the ICRs. Aiming at the problem that the convergence time of the filter is long under changing terrain conditions, the unknown terrain surface is classified from visual information with the KNN terrain classification method, where the image features are extracted using the fractal dimension-based SFTA (segmentation-based fractal texture analysis) method [27]. In order to shorten the running time of the algorithm, the area in front of the robot that needs to be classified is estimated according to the kinematic model. At the same time, the initial value of the error covariance matrix of the EKF is adaptively adjusted when the terrain changes, so that the EKF can quickly re-enter the convergence state. The experimental results show that, compared with the traditional genetic algorithm, the proposed method can greatly shorten the convergence time while ensuring the convergence accuracy.

The remainder of this paper is organized as follows: the kinematic model of a 4W skid-steering mobile robot is presented in Section 2. In Section 3, the EKF method to estimate the values of ICRs is given. The visual-based terrain classification method is also presented. The experiment results and analysis are presented in Section 4. Some conclusions are provided in Section 5.

2. ICR Kinematic Model of the 4W Skid-Steering Mobile Robot

For a 4W skid-steering mobile robot, the following model assumptions are adopted: (1) the rotational speeds of all wheels on the same side of the robot are the same; (2) the robot runs on a solid road and all wheels are in contact with the ground; (3) only 2D plane motion is considered; and (4) the robot is symmetric with respect to its sides.

As shown in Figure 1, an inertial frame (global frame) $\{X, Y\}$ and a body frame $\{x, y\}$ have been defined. The motion of the robot in the body frame and in the inertial frame can be expressed as $(v_x, v_y, \omega_z)$ and $(\dot{X}, \dot{Y}, \dot{\theta})$, respectively. $v_x$ and $v_y$ are the linear velocities, and $\omega_z$ is the angular velocity. $X$ and $Y$ are the two coordinate components of the center of mass in the inertial frame, and $\theta$ is the orientation of the local coordinate frame with respect to the inertial frame. Following [1], it holds that

$$\begin{bmatrix} \dot{X} \\ \dot{Y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} v_x \\ v_y \\ \omega_z \end{bmatrix}.$$

Let the forward linear speeds of the left and right wheels be $v_l$ and $v_r$, respectively. When the mobile robot moves, the ICRs of the left-side tread, the right-side tread, and the robot body are denoted as $ICR_l$, $ICR_r$, and $ICR_v$, respectively. The corresponding coordinates in the body frame are defined as $(x_{ICR_l}, y_{ICR_l})$, $(x_{ICR_r}, y_{ICR_r})$, and $(x_{ICR_v}, y_{ICR_v})$, respectively. The relationship among $v_x$, $v_y$, $\omega_z$, and the ICRs can be presented as [16]:

$$y_{ICR_v} = \frac{v_x}{\omega_z}, \qquad y_{ICR_l} = \frac{v_x - v_l}{\omega_z}, \qquad y_{ICR_r} = \frac{v_x - v_r}{\omega_z}, \qquad x_{ICR_v} = x_{ICR_l} = x_{ICR_r} = -\frac{v_y}{\omega_z}.$$

For the symmetrical model, the kinematic model of a 4W skid-steering mobile robot can be obtained as:

$$\begin{bmatrix} v_x \\ v_y \\ \omega_z \end{bmatrix} = \frac{1}{y_{ICR_l} - y_{ICR_r}} \begin{bmatrix} -y_{ICR_r} & y_{ICR_l} \\ x_{ICR_v} & -x_{ICR_v} \\ -1 & 1 \end{bmatrix} \begin{bmatrix} v_l \\ v_r \end{bmatrix}.$$

It can be found that the kinematics of the 4W skid-steering mobile robot can be presented if the three parameters $x_{ICR_v}$, $y_{ICR_l}$, and $y_{ICR_r}$ are known. Furthermore, all the effects of dynamics, including the wheel-ground contact forces, the distribution of the center of gravity, and the speed of motion, are reflected in these three kinematic parameters. The relatively complex motion state of the system can be described through such a set of parameters. The three parameters $x_{ICR_v}$, $y_{ICR_l}$, and $y_{ICR_r}$ can be estimated in real time using the EKF-based method.
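To make the use of the model concrete, the following Python sketch evaluates the ICR-based kinematic model above and integrates the pose over one sampling step. It is a minimal illustration, not the authors' implementation; the function names and the numerical ICR values are assumptions.

```python
import numpy as np

def icr_body_velocities(v_l, v_r, x_icr_v, y_icr_l, y_icr_r):
    """Body-frame velocities (vx, vy, wz) from wheel speeds and the three ICR parameters."""
    denom = y_icr_l - y_icr_r                     # nonzero for a well-posed model
    vx = (v_r * y_icr_l - v_l * y_icr_r) / denom
    vy = (v_l - v_r) * x_icr_v / denom
    wz = -(v_l - v_r) / denom
    return vx, vy, wz

def integrate_pose(pose, v_l, v_r, icr, dt):
    """One Euler step of the pose (X, Y, theta) in the inertial frame."""
    X, Y, theta = pose
    vx, vy, wz = icr_body_velocities(v_l, v_r, *icr)
    X += (vx * np.cos(theta) - vy * np.sin(theta)) * dt
    Y += (vx * np.sin(theta) + vy * np.cos(theta)) * dt
    theta += wz * dt
    return np.array([X, Y, theta])

# Illustrative values only: ICR parameters (x_ICRv, y_ICRl, y_ICRr) in meters.
pose = np.zeros(3)
icr = (0.02, 0.35, -0.35)
pose = integrate_pose(pose, v_l=0.6, v_r=0.4, icr=icr, dt=0.05)
```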

3. The Estimation of the ICRs’ Location based on EKF

3.1. The Estimation of the ICRs’ Location under the Same Specific Terrain

In this subsection, the EKF method is used to estimate the ICRs' locations. The state vector is defined as $\mathbf{x} = [X, Y, \theta, x_{ICR_v}, y_{ICR_l}, y_{ICR_r}]^T$. The process model is established as:

$$\dot{\mathbf{x}} = f(\mathbf{x}, \mathbf{u}, \mathbf{w}) = \begin{bmatrix} v_x\cos\theta - v_y\sin\theta \\ v_x\sin\theta + v_y\cos\theta \\ \omega_z \\ w_1 \\ w_2 \\ w_3 \end{bmatrix},$$

where $\mathbf{u} = [v_l, v_r]^T$ is the input, $v_x$, $v_y$, and $\omega_z$ are given by the ICR-based kinematic model, and $\mathbf{w} = [w_1, w_2, w_3]^T$ represents an additive Gaussian white noise vector acting on the ICR parameters. Each noise component in $\mathbf{w}$ is uncorrelated with the others. To facilitate the calculation in digital devices, the discrete-time form of the process model can be derived as

$$\mathbf{x}_{k+1} = \mathbf{x}_k + T\, f(\mathbf{x}_k, \mathbf{u}_k, \mathbf{w}_k),$$

where $T$ denotes the sampling period.

The prediction error covariance can be derived as:

$$P_{k+1|k} = F_k P_{k|k} F_k^T + L_k Q_k L_k^T,$$

where $Q_k$ is the process noise variance matrix. The Jacobian matrix of the discrete process model with respect to the state can be derived as:

$$F_k = \left.\frac{\partial \mathbf{x}_{k+1}}{\partial \mathbf{x}_k}\right|_{\hat{\mathbf{x}}_{k|k},\,\mathbf{u}_k} = \begin{bmatrix} I_{3\times 3} + T\,\dfrac{\partial f_{1:3}}{\partial (X, Y, \theta)} & T\,\dfrac{\partial f_{1:3}}{\partial (x_{ICR_v}, y_{ICR_l}, y_{ICR_r})} \\ 0_{3\times 3} & I_{3\times 3} \end{bmatrix},$$

where $f_{1:3}$ denotes the first three rows of $f(\mathbf{x}, \mathbf{u}, \mathbf{w})$, evaluated at the current state estimate. The noise Jacobian matrix can be derived as:

$$L_k = \left.\frac{\partial \mathbf{x}_{k+1}}{\partial \mathbf{w}_k}\right|_{\hat{\mathbf{x}}_{k|k}} = T \begin{bmatrix} 0_{3\times 3} \\ I_{3\times 3} \end{bmatrix}.$$

In the above matrices, $I$ and $0$ represent the identity and zero matrices, respectively.

Because the position and the heading angle can be obtained through differential global positioning system (DGPS) measurements, the observation model can be defined as:

$$\mathbf{z}_k = H\mathbf{x}_k + \mathbf{v}_k,$$

where the observation noise vector $\mathbf{v}_k$ is assumed to be Gaussian white noise. The observation matrix can be obtained as:

$$H = \begin{bmatrix} I_{3\times 3} & 0_{3\times 3} \end{bmatrix}.$$

Furthermore, the filter gain can be calculated as

$$K_k = P_{k+1|k} H^T \left( H P_{k+1|k} H^T + R_k \right)^{-1},$$

where $R_k$ is the observation noise variance matrix. Finally, the state and covariance estimates are updated as:

$$\hat{\mathbf{x}}_{k+1|k+1} = \hat{\mathbf{x}}_{k+1|k} + K_k\left(\mathbf{z}_{k+1} - H\hat{\mathbf{x}}_{k+1|k}\right), \qquad P_{k+1|k+1} = \left(I - K_k H\right) P_{k+1|k}.$$
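To illustrate how the prediction and update equations above fit together, the sketch below implements one EKF cycle in Python, reusing the icr_body_velocities helper from the sketch in Section 2. The numerical Jacobian, the noise covariances Q and R, and the DGPS measurement interface are assumptions made for the example; the paper derives the Jacobians analytically.

```python
import numpy as np

def f_discrete(x, u, dt):
    """Discrete process model: pose propagated with the ICR kinematics, ICR parameters held constant."""
    X, Y, th, x_icr_v, y_icr_l, y_icr_r = x
    v_l, v_r = u
    vx, vy, wz = icr_body_velocities(v_l, v_r, x_icr_v, y_icr_l, y_icr_r)
    return np.array([
        X + (vx * np.cos(th) - vy * np.sin(th)) * dt,
        Y + (vx * np.sin(th) + vy * np.cos(th)) * dt,
        th + wz * dt,
        x_icr_v, y_icr_l, y_icr_r,
    ])

def numerical_jacobian(func, x, eps=1e-6):
    """Central-difference Jacobian (an assumption; analytic Jacobians would be used in practice)."""
    y0 = func(x)
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros(x.size)
        dx[i] = eps
        J[:, i] = (func(x + dx) - func(x - dx)) / (2 * eps)
    return J

H = np.hstack([np.eye(3), np.zeros((3, 3))])   # DGPS observes (X, Y, theta)
L = np.vstack([np.zeros((3, 3)), np.eye(3)])   # process noise drives only the ICR states

def ekf_step(x_hat, P, u, z, Q, R, dt):
    """One predict/update cycle of the ICR-estimating EKF."""
    # Prediction
    F = numerical_jacobian(lambda s: f_discrete(s, u, dt), x_hat)
    x_pred = f_discrete(x_hat, u, dt)
    P_pred = F @ P @ F.T + L @ Q @ L.T
    # Update with DGPS measurement z = (X, Y, theta)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(6) - K @ H) @ P_pred
    return x_new, P_new
```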

3.2. The Estimation of the ICRs’ Location under Different Terrains

It is assumed that the values of the ICRs vary only by random noise around constant values, as assumed in the process model above. When the 4W skid-steering mobile robot travels under the same specific terrain condition, the values of the ICRs will not change greatly. However, if the robot shuttles between different terrain environments, such as ceramic tile pavement, concrete pavement, sandy soil pavement, and grassland, the wheel-ground contact parameters will change, which will also lead to changes in the values of the ICRs. At this time, the assumption that "the ICRs will not change greatly" is no longer valid [19]. In this case, the convergence of the algorithm in Subsection 3.1 cannot be guaranteed.

In order to ensure the convergence speed without reducing the accuracy, it is necessary to make some adjustments to the EKF method. It can be found that the EKF is stable and the error covariance matrix tends to a steady state when the robot stays on terrain A. When the robot runs from terrain A to terrain B, the error covariance matrix has a large initial deviation due to the change of the ICRs. Therefore, based on the existing measurement values, the motion model, and the robot terrain classification method, a method to quickly judge whether the terrain changes (i.e., whether the ICRs mutate) is proposed. When the robot runs from terrain A to terrain B, if a significant change in the ICRs is detected, the initial value of the error covariance matrix of the EKF is adaptively adjusted, which can greatly shorten the convergence time while keeping the accuracy unchanged.

In some references [24, 25], the terrain classification method based on vibration is adopted. The z-axis acceleration information from the IMU is used to extract the frequency characteristics of the acceleration, and a classification algorithm is then used to realize the terrain classification. Although the terrain classification method based on vibration measurement can effectively classify part of the terrain, it cannot obtain the terrain type in advance; it needs to run on the unknown terrain for a period of time to collect enough samples before the terrain can be classified effectively. The terrain classification method based on vision, in contrast, can classify the terrain that the mobile robot is about to pass through, so the types of all the terrains ahead of the robot can be known earlier. In this subsection, we consider how to classify the unknown terrain through the images captured by the camera, and how to estimate the area to be classified according to the kinematic model, so as to shorten the running time of the algorithm.

3.2.1. Image Feature Vector Extraction

In this study, it is assumed that the ground is an ideal plane and the origin of the robot coordinate system coincides with the ground. In this case, the robot coordinate system is a plane coordinate system, and any point on it can be mapped into the image coordinate system. In our experimental platform, an industrial camera was placed on the top of the robot to photograph the terrain in front from top to bottom. The image format output by the industrial camera is Red-Green-Blue (RGB), and the camera sampling frequency is 30 fps. The relationship between the coordinates $(u, v)$ in the image pixel coordinate system and the coordinates $(x_r, y_r)$ in the robot coordinate system can be formulated as [28]:

$$(u, v) = \Phi(x_r, y_r),$$

where $\Phi$ is the transformation function.
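Under the flat-ground assumption, this transformation can be represented by a planar homography between the ground plane and the image plane. The sketch below shows one common way to obtain and apply such a transformation with OpenCV; the calibration point pairs are made-up values and the approach is an illustration, not necessarily the calibration procedure used by the authors.

```python
import numpy as np
import cv2

# Ground-plane points (meters, robot frame) and their observed pixel locations,
# e.g. from a calibration pattern laid on the floor (values are illustrative).
ground_pts = np.array([[1.0, -0.5], [1.0, 0.5], [2.0, 0.5], [2.0, -0.5]], dtype=np.float32)
pixel_pts = np.array([[120, 400], [520, 400], [470, 250], [170, 250]], dtype=np.float32)

# Homography mapping robot ground coordinates to image pixel coordinates.
H_ground_to_img, _ = cv2.findHomography(ground_pts, pixel_pts)

def ground_to_pixel(x_r, y_r):
    """Map a ground-plane point in the robot frame to image pixel coordinates."""
    p = H_ground_to_img @ np.array([x_r, y_r, 1.0])
    return p[:2] / p[2]
```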

The image features are extracted using the fractal dimension-based SFTA method [27]. The SFTA method extracts the image features in two steps. Figure 2 shows the process of image feature extraction, using an RGB image of asphalt pavement captured by the camera as an example.

Step 1: The input RGB image is converted into a grayscale image; and then, the grayscale image is decomposed into a group of binary images. The input RGB and grayscale images are defined as $I_{RGB}$ and $I_{gray}$, respectively. The relationship between these two images is:

$$I_{gray}(x, y) = 0.299\,R(x, y) + 0.587\,G(x, y) + 0.114\,B(x, y),$$

where $R$, $G$, and $B$ denote the three color channels of $I_{RGB}$.

Here, the two-threshold binary decomposition (TTBD) algorithm [26] is used to decompose the grayscale image into a group of binary images. The basic idea of the TTBD algorithm is as follows: when the input parameter is $n_t$, the thresholds $\{t_1, t_2, \ldots, t_{n_t}\}$ are calculated by the multilevel Otsu algorithm [29]. These thresholds minimize the intra-class variance of $I_{gray}$. Each binary image is obtained as:

$$I_b(x, y) = \begin{cases} 1, & t_l < I_{gray}(x, y) \le t_u, \\ 0, & \text{otherwise}, \end{cases}$$

where $t_l$ and $t_u$ denote the lower and upper thresholds of the pair used to generate $I_b$.

Then, $2n_t$ binary images can be obtained using the TTBD algorithm. In this study, $n_t$ is chosen as 4.
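A minimal sketch of the TTBD step is given below, assuming scikit-image's multilevel Otsu implementation for the thresholds; the exact threshold pairing follows the SFTA description and the variable names are illustrative.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import threshold_multiotsu

def ttbd(rgb_image, n_t=4):
    """Two-threshold binary decomposition: returns the grayscale image and 2 * n_t binary images."""
    gray = rgb2gray(rgb_image)                                 # grayscale image in [0, 1]
    thresholds = threshold_multiotsu(gray, classes=n_t + 1)    # n_t multilevel Otsu thresholds
    binaries = []
    # Set 1: pairs of successive thresholds (the last pair is bounded by the maximum gray level).
    bounds = list(thresholds) + [gray.max()]
    for t_low, t_high in zip(bounds[:-1], bounds[1:]):
        binaries.append((gray > t_low) & (gray <= t_high))
    # Set 2: each threshold paired with the maximum gray level.
    for t_low in thresholds:
        binaries.append(gray > t_low)
    return gray, binaries
```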

Step 2: The features of each binary image are extracted, and the feature vector of the input image is composed of the features of all binary images. For each binary image $I_b$, the image boundary map $\Delta(x, y)$ is calculated as:

$$\Delta(x, y) = \begin{cases} 1, & I_b(x, y) = 1 \text{ and } \exists\, (x', y') \in N_8[(x, y)] : I_b(x', y') = 0, \\ 0, & \text{otherwise}, \end{cases}$$

where $N_8[(x, y)]$ denotes the eight pixels adjacent to the pixel $(x, y)$. That is, if the value at the pixel $(x, y)$ is 1 and at least one of its eight adjacent pixels has the value 0, then $\Delta(x, y) = 1$; otherwise, $\Delta(x, y) = 0$. Therefore, the calculated boundary is one pixel wide. After the boundary map has been calculated, the fractal dimension is calculated using the box-counting fractal dimension algorithm. The dimension is used to describe the boundary complexity and structure segmentation of the image.

For each binary image $I_b$, let the number of nonzero-value pixels be $N_b$. For these nonzero-value pixels, the average of the corresponding gray values in $I_{gray}$ is calculated as $\mu_b$. The extracted feature values of each binary image are the fractal dimension $D_b$, $\mu_b$, and $N_b$. Finally, the feature vector of the input image is represented by the feature values extracted from the $2n_t$ binary images combined in order, that is, $\mathbf{F} = [D_1, \mu_1, N_1, \ldots, D_{2n_t}, \mu_{2n_t}, N_{2n_t}]$. The dimension of the feature vector is $6n_t$. In our experiment, $n_t = 4$, so the dimension of the feature vector is 24.
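The per-binary-image features (fractal dimension of the border, mean gray level, and pixel count) can then be computed as in the sketch below. The box-counting routine is a simple illustrative implementation rather than the exact algorithm used by the authors.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def border_map(binary):
    """One-pixel-wide border: pixels equal to 1 with at least one 8-neighbor equal to 0."""
    eroded = binary_erosion(binary, structure=np.ones((3, 3), dtype=bool))
    return binary & ~eroded

def box_counting_dimension(border):
    """Estimate the box-counting fractal dimension of a border map."""
    sizes, counts = [], []
    size = 2
    while size <= min(border.shape) // 2:
        h = border.shape[0] // size * size
        w = border.shape[1] // size * size
        blocks = border[:h, :w].reshape(h // size, size, w // size, size)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))) + 1e-12)
        sizes.append(size)
        size *= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(np.array(counts)), 1)
    return slope

def sfta_features(gray, binaries):
    """Concatenate (fractal dimension, mean gray level, pixel count) for every binary image."""
    feats = []
    for b in binaries:
        feats.append(box_counting_dimension(border_map(b)))
        feats.append(gray[b].mean() if b.any() else 0.0)
        feats.append(np.count_nonzero(b))
    return np.array(feats)                         # length 6 * n_t (24 when n_t = 4)
```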

3.2.2. Improvement of Terrain Classification based on Robot Kinematic Model

In the terrain classification process, feature extraction over the whole image takes a long time. Since the online estimation of the robot kinematic model is greatly affected by the terrain, a longer processing time leads to a larger error when estimating the motion model of the 4W skid-steering mobile robot. In fact, when the 4W skid-steering robot is running, its kinematic model is constantly updated, so the target area for image processing can be optimized using the kinematic model at the current time. In this study, according to the control input, the kinematic model obtained through online estimation is used to predict the motion trajectory and travel area of the robot at the next moment; the range of image feature extraction and classification can then be reduced to improve the processing efficiency.

Figure 3 shows the region prediction for visual terrain classification based on the kinematic model. It is assumed that the body-fixed coordinate frame of the robot at the current time and the speed input $(v_l, v_r)$ are known. The position and orientation of the robot after a prediction horizon $\Delta t$ can be predicted based on the kinematic model obtained by the EKF, as shown in Figure 3. At that time, the centroid coordinate frame of the robot is predicted; its rotation angle and its origin coordinates in the current fixed coordinate frame are expressed as $\theta_{\Delta t}$ and $(X_{\Delta t}, Y_{\Delta t})$, respectively.

Taking into account errors during the experimental process, the dimensions of the robot in the longitudinal and lateral directions are expanded to 1.2 times their original size, denoted as $l_e$ and $w_e$, respectively. The expanded mobile robot is enclosed by a rectangle whose length $L$ and width $W$ are parallel to the x-axis and y-axis, as shown by the red dashed rectangle in Figure 3. The length and width are computed as:

$$L = l_e\left|\cos\theta_{\Delta t}\right| + w_e\left|\sin\theta_{\Delta t}\right|, \qquad W = l_e\left|\sin\theta_{\Delta t}\right| + w_e\left|\cos\theta_{\Delta t}\right|.$$

According to the kinematic model of the robot and the control command obtained from motion planning, the 4W skid-steering mobile robot will move into the range of the red rectangular box after $\Delta t$ seconds. In order to determine which terrain the 4W skid-steering mobile robot will move onto at that time, it is only necessary to classify the area covered by the above-mentioned red rectangular box, while the other areas are not classified. In this way, the classified area is reduced, the time for image processing and terrain classification is shortened, and the accuracy of classification and motion estimation for the 4W skid-steering mobile robot is maintained.
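A sketch of this region-prediction step under the assumptions above: the pose after the horizon $\Delta t$ is obtained by integrating the ICR kinematic model (reusing integrate_pose from the sketch in Section 2), and the classification window is the axis-aligned box around the expanded footprint. The robot footprint dimensions are example values, and the 1.2 expansion factor follows the text.

```python
import numpy as np

def predicted_classification_box(pose_now, v_l, v_r, icr, dt_horizon,
                                 robot_length=0.62, robot_width=0.49,
                                 expand=1.2, steps=20):
    """Predict the robot pose after dt_horizon seconds and return the center and the
    side lengths (L, W) of the axis-aligned rectangle bounding the expanded footprint."""
    pose = np.array(pose_now, dtype=float)
    for _ in range(steps):                         # forward-integrate the kinematic model
        pose = integrate_pose(pose, v_l, v_r, icr, dt_horizon / steps)
    l_e, w_e = expand * robot_length, expand * robot_width
    theta = pose[2]
    L = l_e * abs(np.cos(theta)) + w_e * abs(np.sin(theta))
    W = l_e * abs(np.sin(theta)) + w_e * abs(np.cos(theta))
    return pose[:2], L, W

# The returned ground-plane rectangle can then be mapped into the image (e.g., with the
# homography sketched earlier) so that only that region is passed to feature extraction.
```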

3.2.3. Kinematic Model Identification based on KNN for Terrain Classification

The KNN method is used to classify the terrain. Several samples of known terrain categories are selected as training samples. Then, a category vote among the nearest neighbors is used to determine the terrain category of a test sample. The KNN method is described as follows.

The mobile robot travels on each terrain and collects data to obtain the training sample set of the corresponding terrain. The training sample set is recorded as:

$$T = \{(\mathbf{F}_1, y_1), (\mathbf{F}_2, y_2), \ldots, (\mathbf{F}_N, y_N)\},$$

where $\mathbf{F}_i$ is the feature vector of the $i$th sample, and $y_i$ is the corresponding terrain category. There are $M$ kinds of terrains; thus, $y_i \in \{c_1, c_2, \ldots, c_M\}$. The distance between two samples is defined by the Euclidean distance $d(\mathbf{F}_i, \mathbf{F}_j)$. As described above, the feature vector can be denoted as $\mathbf{F}_i = [F_{i,1}, F_{i,2}, \ldots, F_{i,24}]$. Then,

$$d(\mathbf{F}_i, \mathbf{F}_j) = \sqrt{\sum_{m=1}^{24} \left(F_{i,m} - F_{j,m}\right)^2}.$$

When the robot travels on a specific unknown terrain, the feature vector $\mathbf{F}$ extracted from the current terrain image is taken as the sample to be tested. We investigate the first $k$ nearest neighbors of the sample to be tested among the $N$ training samples, and assume that $k_j$ of these neighbors belong to class $c_j$. Then, the discriminant function of class $c_j$ is:

$$g_j(\mathbf{F}) = k_j, \quad j = 1, 2, \ldots, M.$$

The decision rule is:

$$y = c_{j^*}, \quad j^* = \arg\max_{j \in \{1, \ldots, M\}} g_j(\mathbf{F}).$$

Through real-time measurement, the KNN method is used to obtain the type of terrain on which the robot is currently traveling. Then, the kinematic model of the mobile robot is re-estimated when the terrain changes. If the terrain at the current time differs from the terrain at the previous time, and the robot remains on the new terrain for two consecutive classifications, it is considered that the working terrain of the robot has changed. The error covariance matrix in the EKF is then reset as

$$P_{k|k} \leftarrow P_{\text{reset}}.$$

$P_{\text{reset}}$ is the large initial error covariance value set in advance, which is determined through multiple experimental trials. The principle of the whole kinematic model estimation algorithm, based on terrain classification from vision data, is shown in Figure 4.
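The sketch below combines the KNN vote with the two-consecutive-classification check that triggers the covariance reset. scikit-learn's KNeighborsClassifier is used for brevity, and both the value of k and the reset covariance P_RESET are assumed placeholders for the experimentally tuned quantities.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def build_classifier(train_features, train_labels, k=5):
    """Fit a KNN classifier on SFTA feature vectors (shape (N, 24)) and terrain labels (shape (N,))."""
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(train_features, train_labels)
    return knn

P_RESET = np.diag([0.1, 0.1, 0.05, 1.0, 1.0, 1.0])   # assumed large reset covariance for the EKF

class TerrainMonitor:
    """Declare a terrain change only after two consecutive classifications agree on a new label."""
    def __init__(self, knn, initial_label):
        self.knn = knn
        self.current = initial_label
        self.candidate = None

    def update(self, feature_vector):
        label = self.knn.predict(feature_vector.reshape(1, -1))[0]
        if label != self.current:
            if label == self.candidate:            # second consecutive detection of a new terrain
                self.current = label
                return True                        # caller resets the EKF covariance: P[:] = P_RESET
            self.candidate = label
        else:
            self.candidate = None
        return False
```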

4. Experimental Results

The 4W skid-steering mobile robot used for all testing in this study was a Pioneer P3-AT robot, as shown in Figure 5. An industrial camera was placed on the top of the robot to photograph the terrain in front from top to bottom. During the whole experiment, the mobile robot traveled on marble, concrete, asphalt, and grass. The robot collected images and stored the data on the on-board computer under appropriate lighting conditions.

The classification results of the visual terrain classification method under the four terrains are shown in Table 1. It can be seen that the classification accuracy under all four terrains is above 85%, as shown by the data on the diagonal of Table 1.

In the experiment, the mobile robot travelled from the concrete pavement to the asphalt pavement, and then back to the concrete pavement. While the robot ran through these terrain transitions, the KNN terrain classification algorithm was used for classification, and the EKF was used to estimate the ICRs' parameters. The ICRs' parameters were then used to obtain the model odometry estimation and its deviation, both with visual terrain classification and without terrain classification.

Figure 6 shows a terrain classification result based on the kinematic model during a terrain transition. The yellow area was identified as concrete pavement and the white area as asphalt pavement. As the mobile robot moved, the white area became smaller and the yellow area became larger, indicating that the robot traveled from the asphalt pavement to the concrete pavement. In the 193rd picture, the classification indicated that the robot was still running on the asphalt pavement, while in the 201st picture, it was running completely on the concrete pavement.

Table 2 shows the comparison of the average and maximum operation times with and without the kinematic-model-based improvement of the terrain classification algorithm under the four terrains. The notebook computer used in the experiment was configured with an Intel i7 3.2 GHz processor, 8 GB RAM, and a Windows 8 64-bit operating system. The algorithm was tested under MATLAB 2010a. It can be seen that when the kinematic-model-based improvement was not adopted, the average operation time under the four types of terrain was more than 2 s. With the kinematic-model-based improvement, the average operation time was less than 0.5 s. The speed was increased by more than four times.

The error covariance matrix in the EKF was adjusted based on the terrain classification. The adaptive EKF method was used to estimate the ICRs' parameters and to obtain the model odometry estimation and its deviation, both with visual terrain classification and without terrain classification. The layout of the experimental area is shown in Figure 7. The robot moved on the asphalt pavement for a certain distance before entering the concrete pavement. After moving on the concrete pavement for about 20 s, it entered the asphalt pavement again. The built-in kinematic model of the P3-AT [13] (abbreviated as the P3-AT DF model) was also used to produce an odometry estimate. The comparison results are shown in Figure 8.

Figure 9 shows that at the two instants of terrain transition, the ICRs' parameters all change suddenly (as shown by the change of the red dotted lines in Figure 9). We judge that these are the times of terrain conversion. Figure 8(b) shows the position errors of the different odometry estimates. The average error of the odometry estimation using visual terrain classification was 0.06 m, while the average error of the odometry estimation without terrain classification was 0.14 m. The error was reduced by 57.1%. The method based on visual terrain classification can detect changes of terrain quickly.

5. Conclusions

This paper investigates a terrain-adaptive odometry method for a 4W skid-steering mobile robot. The odometry calculation is based on the kinematic model using the ICRs. The EKF method is used to estimate the parameters of the ICRs in real time. In order to make the Kalman filter on the robot adapt to terrain changes quickly, a vision-based terrain classification method has been designed so that the robot can recognize the terrain in real time. Additionally, the forward motion trajectory is predicted based on the kinematic model, which is used to restrict image processing to a small region. The proposed method reduces the calculation time of image feature extraction. Once the robot detects a terrain change, the relevant error covariance matrix of the Kalman filter is adaptively reset, so that the convergence speed of the filter can be guaranteed.

Furthermore, the estimated ICRs’ parameters will quickly converge to the true value. The real-time kinematic model of the robot will be obtained. The estimation of the odometer will be more accurate.

In [16], under the assumption that the mobile robot moves on a flat surface, with the wheels on the same side rotating at the same speed and all wheels in contact with the ground, Martínez et al. adopted the ICR method to establish the kinematic model of a skid-steering mobile robot. The sliding effect caused by complex dynamic effects is reflected in the changes of the ICRs' values. In our study, we have inherited the idea of Martínez et al. [16] of using ICRs to establish the kinematic model of a 4W skid-steering mobile robot. However, in an actual terrain environment, the above assumptions cannot be met at all times. At this point, the model has certain errors. In the Kalman filtering algorithm used in this paper, Gaussian white noise is introduced into the model, which can reflect certain error information but cannot fully reflect the real situation. In the future, further research will be conducted on the applicability of this method on nonflat surfaces.

During actual long-term operation, the lighting conditions change constantly. Robustness to varying lighting conditions will also be considered in our future engineering prototypes.

Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was funded in part by the Teacher Training Program of Longyan University.