Abstract

A vision/inertia integrated positioning method using position and orientation matching, which can be adopted on intelligent vehicles such as automated guided vehicles (AGVs) and mobile robots, is proposed in this work. The method is introduced first. Landmarks are placed in the navigation field, and a camera and an inertial measurement unit (IMU) are installed on the vehicle. The vision processor calculates azimuth and position information from pictures that contain artificial landmarks of known direction and position. The inertial navigation system (INS) calculates the azimuth and position of the vehicle in real time, and the calculated pixel position of the landmark can be computed from the INS output position. The needed mathematical models are then established, and integrated navigation is implemented by a Kalman filter whose observations are the azimuth and the calculated pixel position of the landmark. Navigation errors and IMU errors are estimated and compensated in real time so that high precision navigation results can be obtained. Finally, a simulation and a vehicle test are performed. Both simulation and test results prove that this vision/inertia integrated positioning method using position and orientation matching is feasible and can achieve centimeter-level autonomous continuous navigation.

1. Introduction

Intelligent vehicles have entered everyday life and have shown increasingly extensive value in military and civil applications. Navigation is one of the key technologies of mobile intelligent vehicles and has been widely researched in the recent two decades [1–3]. With the development of science and technology, the navigation requirements of intelligent vehicles keep rising: the trend is toward higher accuracy, more flexibility, and lower cost. Especially for the automated guided vehicle (AGV), which is widely adopted in factories, high precision positioning is the key to determining whether it can accomplish its other tasks well [4, 5].

At present, the common navigation methods for mobile intelligent vehicles are the global positioning system (GPS), laser navigation, line guidance, electromagnetic guidance, inertial navigation, vision navigation, and so on. These methods have their own particular features, with both advantages and disadvantages. The principle of GPS is that the absolute position of the vehicle is calculated from wireless signals of certain frequencies received from satellites [6]. This navigation method is the most widely used, but it cannot meet the centimeter-level requirement of intelligent vehicles and it is only applicable outdoors. The principle of laser navigation is that a laser is emitted by a rotating mechanism, and the angle of the rotating mechanism is recorded when it detects a cooperative signpost composed of reflectors. The position and orientation of the vehicle can then be obtained from the angles and the positions of the cooperative signposts [7, 8]. This is a high-cost navigation method. The principle of line guidance or electromagnetic guidance is that a line or cable is placed along the running path of the vehicle in advance and is detected by an inductor installed on the vehicle [9], which makes the vehicle run along the line or cable. This manner can only meet simple navigation requirements. Inertial navigation is an effective autonomous navigation technology, and an inertial navigation system (INS) can output complete navigation information. In particular, a MEMS INS has the advantages of small size, low power consumption, and low cost, but its precision is low, its measurement errors are large, and its position error accumulates rapidly with time. It must therefore be integrated with external navigation information to obtain high precision navigation results [10, 11]. Vision navigation is an advanced navigation manner that has developed recently. It obtains navigation information from pictures using image processing, computer vision, pattern recognition, and so on [12]. It has the advantages of high precision and a broad sensing area, but its heavy computational load leads to poor real-time performance. In recent years, vision positioning technology based on artificial landmarks has been playing an ever-larger role. Compared with natural landmarks, it avoids heavy computation and has better real-time performance [13]. It is very suitable for structured environments, but its output is not continuous. It can be seen that the MEMS INS and the vision positioning method based on artificial landmarks have obviously complementary characteristics.

Here, the vision positioning method based on artificial landmarks is integrated with inertial navigation technology to achieve navigation. This method can effectively realize their complementary advantages. It has the characteristics of high accuracy, low cost, small computational load, and high real-time performance, and it can meet the requirements of many complex tasks. The paper first introduces this vision/inertia integrated positioning method using position and orientation matching. Then the needed mathematical models are established, in which the measurement model builds the connection between vision and inertia. Finally, mathematical simulation and an experiment are performed to verify the effectiveness of the proposed method.

2. Vision/Inertia Integrated Method

Vision positioning technology based on artificial landmarks is suitable for structured environments. Before using this method, it is necessary to survey and map the site and place the landmarks in it. A kind of artificial landmark is shown in Figure 1. The direction of the feature points of the artificial landmark carries the orientation information, and the absolute position of the artificial landmark can be embedded into the landmark pattern by encoding. The vision processor can therefore output three pieces of information: the pixel coordinates of the landmark in the picture, the absolute position of the landmark, and the orientation. Vision navigation based on artificial landmarks is integrated with the MEMS INS to achieve high precision navigation, and its working principle is shown in Figure 2. The MEMS INS and the camera are firmly fixed on the vehicle. The INS outputs the velocity, position, and attitude of the vehicle continuously, and the camera collects images in real time while the vehicle is moving. If the camera captures an image of a landmark whose direction and position are known, the vision processor extracts the orientation and position information from the picture by image processing and landmark recognition. Matching this information with the INS outputs realizes high precision positioning of the vehicle.

A Kalman filter is used to realize the integration of the INS outputs and the landmark information. The schematic diagram of this integration method is shown in Figure 3. The error state model of the Kalman filter is established according to the MEMS INS error model, and the measurement model is established from the difference between the INS and vision measurements of the same parameters. Here, the observation consists of two parts. One is the orientation difference between the INS output and the vision measurement. The other is the difference between the calculated pixel position of the landmark and the pixel coordinates collected directly by the camera. The calculated pixel position of the landmark is obtained by projecting, into the pixel coordinate system, the position vector from the INS output position to the absolute position of the landmark. The Kalman filter estimates the system error states and corrects the navigation errors by feedback, and the IMU errors can be calibrated and compensated online, so that the system outputs optimal estimates of the navigation parameters.
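To make the structure of this integration concrete, the following is a minimal, self-contained Python sketch of the prediction and update steps of such an error-state Kalman filter. The dimensions, placeholder matrices, and variable names are illustrative assumptions consistent with the models of Section 3, not code from the paper.

import numpy as np

def kf_predict(x, P, F, Q):
    # time update: propagate the error state and its covariance
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    # measurement update with the two-part observation
    # z = [pixel position difference (2); orientation difference (1)]
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# usage sketch: 15 error states, 3-dimensional observation, placeholder matrices
x = np.zeros(15)
P = np.eye(15) * 1e-2
F = np.eye(15)                     # discretized state matrix from Section 3.2 (placeholder)
Q = np.eye(15) * 1e-6              # process noise covariance (placeholder)
H = np.zeros((3, 15))              # measurement matrix from Section 3.4 (placeholder)
R = np.diag([25.0, 25.0, np.deg2rad(0.4) ** 2])   # assumed measurement noise covariance
z = np.zeros(3)                    # INS-minus-vision observation when a landmark is recognized
x, P = kf_predict(x, P, F, Q)
x, P = kf_update(x, P, z, H, R)

The prediction step runs at the INS rate, while the update step runs only when a landmark image is recognized, which is what allows the filter to restrain the INS error growth between landmark observations.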

As shown in Figure 3, in order to realize the integration, the needed mathematical models, including the MEMS INS error model, the vision navigation error model, the state-space model of the Kalman filter, the measurement model, and the correction model of the navigation errors and IMU errors, should be established.

3. Integrated Mathematical Models

3.1. MEMS INS Error Model

When inertial navigation is integrated with other navigation systems through a Kalman filter, the state equations of the integrated navigation system are commonly established from the INS error equations. The dynamic INS error model has long been established and researched in the literature, such as in [14, 15], and can be referenced as follows:

\[ \delta\dot{\mathbf{v}}^n = \mathbf{f}^n \times \boldsymbol{\phi} - (2\boldsymbol{\omega}_{ie}^n + \boldsymbol{\omega}_{en}^n) \times \delta\mathbf{v}^n + \mathbf{C}_b^n \boldsymbol{\nabla}^b, \]
\[ \dot{\boldsymbol{\phi}} = -(\boldsymbol{\omega}_{ie}^n + \boldsymbol{\omega}_{en}^n) \times \boldsymbol{\phi} - \mathbf{C}_b^n \boldsymbol{\varepsilon}^b, \]
\[ \delta\dot{L} = \frac{\delta v_N}{R_M}, \qquad \delta\dot{\lambda} = \frac{\delta v_E}{R_N \cos L}, \qquad \delta\dot{h} = \delta v_U, \]

where $\delta\mathbf{v}^n$ is the velocity error vector in the navigation coordinate frame; $\boldsymbol{\omega}_{ie}^n$ is the earth rotation rate in the navigation coordinate frame; $\boldsymbol{\omega}_{en}^n$ is the rotation rate of the navigation coordinate frame with respect to the earth coordinate frame; $\boldsymbol{\phi}$ is the attitude error vector; $\mathbf{f}^n$ is the accelerometer's output specific force in the navigation frame; $\boldsymbol{\nabla}^b$ is the accelerometer's output specific force error in the body frame; $\mathbf{C}_b^n$ is the attitude matrix from the body coordinate frame to the navigation coordinate frame; $\boldsymbol{\varepsilon}^b$ is the gyro measurement error in the body frame; $\delta L$ is the latitude position error; $\delta\lambda$ is the longitude position error; $\delta h$ is the height error; and $R_M$ and $R_N$ represent the radii of curvature along lines of constant longitude and latitude, respectively.

A MEMS sensor cannot measure the earth rotation rate because of its low precision. Moreover, with the limits imposed by artificial landmark placement, the integrated navigation method of INS and vision positioning based on artificial landmarks is only suitable for structured environments, which means that the moving area of the vehicle is considerably limited. Therefore, the global coordinate frame is selected as the navigation coordinate frame, and the MEMS INS error model can be expressed as

\[ \dot{\boldsymbol{\phi}} = -\mathbf{C}_b^n \boldsymbol{\varepsilon}^b, \qquad \delta\dot{\mathbf{v}}^n = \mathbf{f}^n \times \boldsymbol{\phi} + \mathbf{C}_b^n \boldsymbol{\nabla}^b, \qquad \delta\dot{\mathbf{p}} = \delta\mathbf{v}^n, \]

where $\mathbf{p} = [x \;\; y \;\; z]^T$ is the position of the vehicle in the navigation coordinate frame, which is the global coordinate frame.

The errors of inertial components commonly include installation error, scale factor error, and random error. For the sake of this discussion, only random error is considered. Under land vehicle application conditions, the dynamic range of the vehicle is relatively small, and both the accelerometer and gyroscope errors are considered as the sum of a bias (random constant) and white noise. Supposing the error models of the three axes are the same, they can be expressed as

\[ \boldsymbol{\varepsilon} = \boldsymbol{\varepsilon}_b + \mathbf{w}_g, \quad \dot{\boldsymbol{\varepsilon}}_b = 0; \qquad \boldsymbol{\nabla} = \boldsymbol{\nabla}_b + \mathbf{w}_a, \quad \dot{\boldsymbol{\nabla}}_b = 0, \]

where $\boldsymbol{\varepsilon}_b$ is an arbitrary constant (the gyroscope bias), $\mathbf{w}_g$ is white noise, $\boldsymbol{\nabla}_b$ is an arbitrary constant (the accelerometer bias), and $\mathbf{w}_a$ is white noise.
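As an illustration of this error model, the following short Python sketch generates a gyroscope error sequence as a random-constant bias plus discrete white noise. The numerical values match the simulation settings of Section 4; the sampling rate and the conversion to per-sample noise are otherwise standard assumptions.

import numpy as np

fs = 100.0                                 # IMU sampling rate in Hz
n = 1000                                   # number of samples
bias = np.deg2rad(35.0) / 3600.0           # 35 deg/h random-constant bias, in rad/s
arw = np.deg2rad(30.0) / 60.0              # 30 deg/sqrt(h) angular random walk, in rad/sqrt(s)

# gyro error = constant bias + white noise scaled for the sampling rate
white_noise = arw * np.sqrt(fs) * np.random.randn(n)
gyro_error = bias + white_noise            # rad/s, added to the true angular rate in simulation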

Gyroscopes are directly installed on the vehicle in the strapdown inertial navigation system, so the gyroscope errors are measured in the body coordinate frame and should be transformed into the navigation coordinate frame. The accelerometer errors are transformed in the same way as the gyroscope errors, and the transformations can be expressed as

\[ \boldsymbol{\varepsilon}^n = \mathbf{C}_b^n \boldsymbol{\varepsilon}^b, \qquad \boldsymbol{\nabla}^n = \mathbf{C}_b^n \boldsymbol{\nabla}^b. \]

3.2. State-Space Model of Integrated System

The error state equation of the integrated system can be obtained from (2)–(4), and it can be expressed as

\[ \dot{\mathbf{X}}(t) = \mathbf{F}(t)\mathbf{X}(t) + \mathbf{G}(t)\mathbf{W}(t). \]

In the equation, $\mathbf{X}$ is a 15-dimensional error state vector composed of the navigation errors and the inertial sensor errors:

\[ \mathbf{X} = [\phi_x \;\; \phi_y \;\; \phi_z \;\; \delta v_x \;\; \delta v_y \;\; \delta v_z \;\; \delta x \;\; \delta y \;\; \delta z \;\; \nabla_x \;\; \nabla_y \;\; \nabla_z \;\; \varepsilon_x \;\; \varepsilon_y \;\; \varepsilon_z]^T. \]

Here $\phi_x$, $\phi_y$, and $\phi_z$ stand for the platform error angles; $\delta v_x$, $\delta v_y$, and $\delta v_z$ stand for the velocity errors; $\delta x$, $\delta y$, and $\delta z$ stand for the position errors; $\nabla_x$, $\nabla_y$, and $\nabla_z$ stand for the bias errors of the accelerometers; $\varepsilon_x$, $\varepsilon_y$, and $\varepsilon_z$ stand for the bias errors of the gyroscopes; $\mathbf{W}$ is the system process noise, composed of the white noises of the inertial sensors; $\mathbf{G}$ is the system noise driving matrix; and $\mathbf{F}$ is the state matrix, whose block structure is sketched below.
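Given the simplified error model (2)–(4) and the state ordering above, the state matrix has the following block form. This is a sketch consistent with those equations rather than an expression reproduced from the original, with $(\mathbf{f}^n\times)$ denoting the skew-symmetric matrix of the specific force in the navigation frame:

\[ \mathbf{F} = \begin{bmatrix} \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & -\mathbf{C}_b^n \\ (\mathbf{f}^n\times) & \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & \mathbf{C}_b^n & \mathbf{0}_{3\times3} \\ \mathbf{0}_{3\times3} & \mathbf{I}_{3} & \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} \\ \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} \\ \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} \end{bmatrix}. \]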

3.3. Vision Navigation Model

The outputs of the vision navigation system based on artificial landmarks are composed of the absolute position of the artificial landmark in the navigation frame, the pixel position of the landmark in the pixel coordinate frame, and the orientation information. The absolute positions of the landmarks are embedded into the landmark patterns and can be extracted from the pictures of the landmarks collected by the camera. Their precision is affected by the surveying precision of the placement points; map errors are not considered here. The pixel coordinates of the landmark and the orientation information are directly measured by the vision camera, and their precision is affected by the camera manufacturing precision, camera shaking while the vehicle is moving, the lighting conditions, and so on. If the distance from the camera to the landmark is short, the relative errors are not obvious. So we can consider that the pixel position and orientation information measured by vision are affected only by white noise, and the vision navigation model can be written as

\[ \mathbf{u}_v = \mathbf{u} + \mathbf{w}_u, \qquad \psi_v = \psi + w_\psi, \]

where $\mathbf{u}_v$ is the pixel position of the artificial landmark collected directly by vision, $\psi_v$ is the vision output orientation, $\mathbf{u}$ and $\psi$ are the true pixel position and orientation, and $\mathbf{w}_u$ and $w_\psi$ are white noise.

3.4. Measurement Model

The measurement model reflects the relationship between the observation and the state, and it is the link between inertial navigation and visual navigation. As discussed previously, both the INS and vision can measure the pixel position of the landmark and the orientation, and their differences form the observation of the integrated system. Therefore, the observation has two parts: one is the pixel position difference of the artificial landmark between the INS calculation and the vision camera collection, and the other is the orientation difference between the INS output and the vision output, which can be expressed as

\[ \mathbf{Z} = \begin{bmatrix} \mathbf{u}_I - \mathbf{u}_v \\ \psi_I - \psi_v \end{bmatrix}, \]

where $\mathbf{u}_I$ stands for the calculated pixel position of the artificial landmark from the INS measurement and $\psi_I$ is the INS output orientation.

The measurement model of the integrated system can be given by

\[ \mathbf{Z}(t) = \mathbf{H}(t)\mathbf{X}(t) + \mathbf{V}(t), \]

where $\mathbf{H}$ is the measurement matrix and $\mathbf{V}$ is the measurement noise.
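Given this state-space model and measurement model, the filter runs the standard discrete Kalman recursion. The following equations are the textbook form rather than anything specific to this paper, with $\boldsymbol{\Phi}_{k/k-1}$ the discretized form of $\mathbf{F}(t)$ and $\mathbf{Q}_k$ and $\mathbf{R}_k$ the process and measurement noise covariance matrices:

\[ \hat{\mathbf{X}}_{k/k-1} = \boldsymbol{\Phi}_{k/k-1}\hat{\mathbf{X}}_{k-1}, \qquad \mathbf{P}_{k/k-1} = \boldsymbol{\Phi}_{k/k-1}\mathbf{P}_{k-1}\boldsymbol{\Phi}_{k/k-1}^T + \mathbf{Q}_{k-1}, \]
\[ \mathbf{K}_k = \mathbf{P}_{k/k-1}\mathbf{H}_k^T\bigl(\mathbf{H}_k\mathbf{P}_{k/k-1}\mathbf{H}_k^T + \mathbf{R}_k\bigr)^{-1}, \]
\[ \hat{\mathbf{X}}_k = \hat{\mathbf{X}}_{k/k-1} + \mathbf{K}_k\bigl(\mathbf{Z}_k - \mathbf{H}_k\hat{\mathbf{X}}_{k/k-1}\bigr), \qquad \mathbf{P}_k = (\mathbf{I} - \mathbf{K}_k\mathbf{H}_k)\mathbf{P}_{k/k-1}. \]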

From (10) we can see that the measurement matrix establishes the relationship of the orientation and the pixel position of the landmark between the INS and vision. The measurement matrix is derived in the following.

(1) Position Observation. The position of the landmark in the pixel coordinate system can be calculated from the vehicle position in the navigation coordinate frame given by the INS and the landmark position in the navigation coordinate frame extracted from the image by the vision processor. The goal is to obtain the position observation by comparing it with the pixel position of the landmark collected by the camera. Figure 4 shows the definitions of the relevant coordinate systems. Assume that $P$ is the landmark point; $O_c$ is the optical center (focus) of the camera; $h$ is the distance from the camera to the ground; $\mathbf{P}_I^n$ is the vehicle position in the navigation coordinate frame given by the INS; and $\mathbf{P}_L^n$ is the landmark position encoded in the landmark pattern. The landmark position in the camera coordinate system can be obtained by transforming the position vector $\mathbf{P}_L^n - \mathbf{P}_I^n$ with the strapdown matrix $\mathbf{C}_n^b$ and the installation matrix between the IMU and the camera. Then the pixel position of the artificial landmark can be given by projecting it into the pixel coordinate system.

The relationship between the pixel position of the landmark and the vehicle position from the INS output can be given by

\[ \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \frac{1}{z_c}\,\mathbf{K}\,\mathbf{C}_b^c\,\mathbf{C}_n^b\,(\mathbf{P}_L^n - \mathbf{P}_I^n), \]

where $(u, v)$ is the pixel position of the landmark, $z_c$ is the depth of the landmark in the camera coordinate system, and $\mathbf{C}_b^c$ is the installation matrix from the body frame to the camera frame.

In the equation, $\mathbf{K} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$ is the camera internal parameter matrix, $f_x$ and $f_y$ are constants, and $(u_0, v_0)$ is the camera principal point coordinate in the pixel coordinate system. The height between the camera and the landmark is basically unchanged after the camera is installed vertically on the vehicle, so $z_c$ can be replaced with the camera installation height $h$.
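To make the projection concrete, the following is a minimal Python sketch of the calculated pixel position of a landmark from an INS position and attitude. The function name, the example intrinsic values, and the axis convention (all frames aligned, z axis pointing down toward the floor so that the landmark depth equals the installation height) are illustrative assumptions, not values from the paper.

import numpy as np

def predicted_pixel(p_ins_n, p_landmark_n, C_n_b, C_b_c, K, height):
    # vector from the INS output position to the landmark, in the navigation frame
    r_n = p_landmark_n - p_ins_n
    # transform into the camera frame with the strapdown and installation matrices
    r_c = C_b_c @ C_n_b @ r_n
    # pinhole projection, with the depth replaced by the camera installation height
    u = K[0, 0] * r_c[0] / height + K[0, 2]
    v = K[1, 1] * r_c[1] / height + K[1, 2]
    return np.array([u, v])

# usage with illustrative numbers (assumed intrinsics in pixels, positions in meters)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
C_n_b = np.eye(3)                        # level vehicle, heading aligned with the navigation frame
C_b_c = np.eye(3)                        # assumed camera/IMU alignment
p_ins = np.array([1.0, 2.0, -0.5])       # vehicle 0.5 m above the floor (z-down convention)
p_lm = np.array([1.1, 2.2, 0.0])         # landmark on the floor, decoded from its pattern
print(predicted_pixel(p_ins, p_lm, C_n_b, C_b_c, K, height=0.5))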

Various errors are included in the actual navigation calculation. Assume that $\tilde{\mathbf{P}}_I^n$ is the vehicle position calculated by the INS, including error; $\tilde{\mathbf{P}}_L^n$ is the landmark position, including map error; $\tilde{\mathbf{C}}_n^b$ is the strapdown matrix, including error; $\tilde{\mathbf{C}}_b^c$ is the camera installation matrix between the IMU and the camera, including error; $\tilde{\mathbf{K}}$ is the camera internal parameter matrix, including error; and $\tilde{\mathbf{r}}^c$ is the landmark position in the camera coordinate system, including error. Their relationship can be given by

\[ \begin{bmatrix} \mathbf{u}_I \\ 1 \end{bmatrix} = \frac{1}{h}\,\tilde{\mathbf{K}}\,\tilde{\mathbf{r}}^c, \qquad \tilde{\mathbf{r}}^c = \tilde{\mathbf{C}}_b^c\,\tilde{\mathbf{C}}_n^b\,(\tilde{\mathbf{P}}_L^n - \tilde{\mathbf{P}}_I^n). \]

Assume that the camera has been installed and calibrated; we consider only the position and attitude errors of the INS outputs and ignore the installation error between the camera coordinate system and the body coordinate frame, the map error of the landmark placement, and the camera internal parameter error. So (12) can be simplified as

\[ \begin{bmatrix} \mathbf{u}_I \\ 1 \end{bmatrix} = \frac{1}{h}\,\mathbf{K}\,\mathbf{C}_b^c\,\tilde{\mathbf{C}}_n^b\,(\mathbf{P}_L^n - \tilde{\mathbf{P}}_I^n). \]

In the equation,

\[ \tilde{\mathbf{C}}_n^b = \mathbf{C}_n^b(\mathbf{I} + \boldsymbol{\phi}\times), \qquad \tilde{\mathbf{P}}_I^n = \mathbf{P}_I^n + \delta\mathbf{P}^n, \]

where $(\boldsymbol{\phi}\times)$ is the skew-symmetric matrix of the attitude error vector and $\delta\mathbf{P}^n$ is the INS position error in the navigation frame.

Substituting (14) into (13), expanding, and ignoring higher-order small quantities, (13) can be rewritten as

\[ \begin{bmatrix} \mathbf{u}_I \\ 1 \end{bmatrix} \approx \frac{1}{h}\,\mathbf{K}\,\mathbf{C}_b^c\,\mathbf{C}_n^b(\mathbf{P}_L^n - \mathbf{P}_I^n) + \frac{1}{h}\,\mathbf{K}\,\mathbf{C}_b^c\,\mathbf{C}_n^b\bigl[(\boldsymbol{\phi}\times)(\mathbf{P}_L^n - \mathbf{P}_I^n) - \delta\mathbf{P}^n\bigr]. \]

Therefore, the pixel position error of the landmark calculated from the MEMS INS can be expressed as

\[ \delta\mathbf{u} = \mathbf{u}_I - \begin{bmatrix} u \\ v \end{bmatrix} = \frac{1}{h}\,\Bigl\{\mathbf{K}\,\mathbf{C}_b^c\,\mathbf{C}_n^b\bigl[(\boldsymbol{\phi}\times)(\mathbf{P}_L^n - \mathbf{P}_I^n) - \delta\mathbf{P}^n\bigr]\Bigr\}_{(1:2)}, \]

where the subscript $(1:2)$ denotes taking the first two rows, and the first term of (15) is the error-free pixel position given by (11).

The position observation can then be written as

\[ \mathbf{Z}_p = \mathbf{u}_I - \mathbf{u}_v = \delta\mathbf{u} - \mathbf{w}_u. \]

(2) Orientation Observation. Both the vision positioning method based on artificial landmarks and the MEMS INS can measure the orientation, and the orientation difference between them is the second part of the observation of the integrated navigation system. The orientation from the INS output is computed from a ratio of elements of the attitude matrix, so the error of the attitude matrix introduces an orientation error. Let the real orientation of the vehicle be $\psi$ and the INS output orientation error be $\delta\psi$; then the INS output orientation can be expressed as

\[ \psi_I = \psi + \delta\psi. \]

The attitude matrix can be written in terms of the heading, pitch, and roll angles.

The attitude matrix including error can be expanded as

\[ \tilde{\mathbf{C}}_b^n = (\mathbf{I} - \boldsymbol{\phi}\times)\,\mathbf{C}_b^n, \]

where $(\boldsymbol{\phi}\times)$ is the skew-symmetric matrix of the platform error angles.

The attitude angles can be expressed in terms of the elements of the attitude matrix; in particular, the heading is obtained from the arctangent of a ratio of two of its elements.

After expanding by a Taylor series and ignoring higher-order small quantities, the orientation error $\delta\psi$ can be obtained from the relationship in (21) as a linear combination of the platform error angles [16].
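As a sketch of this step (in generic notation, since the explicit result depends on the Euler angle convention adopted): if the heading is extracted as $\psi = \arctan(T_{ij}/T_{kl})$ from two elements of $\mathbf{T} = \mathbf{C}_b^n$, then perturbing the attitude matrix as above, $\delta\mathbf{T} = -(\boldsymbol{\phi}\times)\mathbf{T}$, and applying the chain rule gives

\[ \delta\psi = \frac{T_{kl}\,\delta T_{ij} - T_{ij}\,\delta T_{kl}}{T_{ij}^2 + T_{kl}^2}, \]

which is linear in $\phi_x$, $\phi_y$, $\phi_z$. For a nearly level vehicle this reduces to approximately plus or minus the error angle about the vertical axis, with the sign fixed by the heading convention.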

Therefore the orientation observation can be expressed as

\[ Z_\psi = \psi_I - \psi_v = \delta\psi - w_\psi. \]

Finally, stacking the position observation and the orientation observation, the measurement matrix $\mathbf{H}$ can be obtained; its nonzero blocks act on the platform error angles and the position errors in the state vector, as sketched below.
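A sketch of this structure, assuming the 15-state ordering of Section 3.2 and the first-order pixel and orientation error expressions above (the explicit sub-blocks would be filled in from those expressions; this is an assumed layout, not the matrix reproduced from the original):

\[ \mathbf{H} = \begin{bmatrix} \mathbf{H}_{u\phi} & \mathbf{0}_{2\times3} & \mathbf{H}_{up} & \mathbf{0}_{2\times3} & \mathbf{0}_{2\times3} \\ \mathbf{h}_{\psi\phi} & \mathbf{0}_{1\times3} & \mathbf{0}_{1\times3} & \mathbf{0}_{1\times3} & \mathbf{0}_{1\times3} \end{bmatrix}, \]

where $\mathbf{H}_{u\phi}$ and $\mathbf{H}_{up}$ map the platform error angles and the position errors to the pixel-position observation, and $\mathbf{h}_{\psi\phi}$ maps the platform error angles to the orientation observation.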

3.5. Correction Model

The navigation errors and the inertial component errors can be estimated in the MEMS INS/vision integration process, and they can be used to compensate the system online.

(1) IMU Errors Correction. Let $\tilde\omega_x$, $\tilde\omega_y$, $\tilde\omega_z$ and $\omega_x$, $\omega_y$, $\omega_z$ be the gyroscope outputs before and after correction, respectively; let $\tilde f_x$, $\tilde f_y$, $\tilde f_z$ and $f_x$, $f_y$, $f_z$ be the accelerometer outputs before and after correction, respectively; and let $\hat\varepsilon_x$, $\hat\varepsilon_y$, $\hat\varepsilon_z$ and $\hat\nabla_x$, $\hat\nabla_y$, $\hat\nabla_z$ be the error estimates of the gyroscope outputs and of the accelerometer outputs, respectively. After the errors of the inertial components are estimated, the inertial component outputs can be corrected by

\[ \omega_i = \tilde\omega_i - \hat\varepsilon_i, \qquad f_i = \tilde f_i - \hat\nabla_i, \qquad i = x, y, z. \]

(2) Navigation Errors Correction. The relationship between the real attitude matrix $\mathbf{C}_b^n$ and the calculated attitude matrix $\tilde{\mathbf{C}}_b^n$ can be expressed as

\[ \mathbf{C}_b^n = (\mathbf{I} + \hat{\boldsymbol{\phi}}\times)\,\tilde{\mathbf{C}}_b^n. \]

After the platform error angles are estimated, the attitude matrix can be corrected by the above equation, and the corresponding attitude angles can then be calculated from the corrected $\mathbf{C}_b^n$.

The relationship among the real velocity $\mathbf{v}$, the output velocity $\tilde{\mathbf{v}}$, and the velocity error $\delta\mathbf{v}$ can be expressed as

\[ \mathbf{v} = \tilde{\mathbf{v}} - \delta\mathbf{v}. \]

After estimating errors of velocity, the velocity output can be corrected by the above equation.

The relationship among the real position $\mathbf{p}$, the output position $\tilde{\mathbf{p}}$, and the position error $\delta\mathbf{p}$ can be expressed as

\[ \mathbf{p} = \tilde{\mathbf{p}} - \delta\mathbf{p}. \]

After estimating errors of position, the position output can be corrected by the above equation.
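As a compact illustration of this feedback correction, the following is a minimal Python sketch. The 15-element state ordering and all variable names are assumptions consistent with Section 3.2, not an implementation from the paper.

import numpy as np

def skew(v):
    # skew-symmetric matrix of a 3-vector
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def apply_corrections(dx, C_b_n, vel, pos, gyro_bias, accel_bias):
    # dx: estimated error state [phi(3), dv(3), dp(3), accel bias(3), gyro bias(3)]
    phi = dx[0:3]
    C_b_n = (np.eye(3) + skew(phi)) @ C_b_n   # attitude matrix correction, (I + phi x) C
    vel = vel - dx[3:6]                        # velocity correction
    pos = pos - dx[6:9]                        # position correction
    accel_bias = accel_bias + dx[9:12]         # accumulated accelerometer bias, subtracted from raw outputs
    gyro_bias = gyro_bias + dx[12:15]          # accumulated gyroscope bias, subtracted from raw outputs
    return C_b_n, vel, pos, gyro_bias, accel_bias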

4. Simulation

The mathematical simulation is conducted to verify the proposed vision/inertia integrated positioning method using position and orientation matching. According to the typical maneuvers of an AGV, the movement track of the vehicle in the simulation is set as follows:
(1) Accelerate at 0.05 m/s² for 10 s.
(2) Move at a constant velocity for 10 s.
(3) Turn 90 degrees left at an angular rate of 9°/s for 10 s.
(4) Move at a constant velocity for 30 s.
(5) Turn 90 degrees left at an angular rate of 9°/s for 10 s.
(6) Move at a constant velocity for 60 s.
(7) Accelerate at −0.05 m/s² for 10 s.
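A simple Python sketch of this maneuver profile follows, built as piecewise-constant acceleration and turn rate. The segment values and durations come from the list above; the integration scheme and variable names are assumptions for illustration.

import numpy as np

# (acceleration [m/s^2], turn rate [deg/s], duration [s]) for each segment
segments = [
    (0.05, 0.0, 10.0),    # 1: accelerate
    (0.00, 0.0, 10.0),    # 2: constant velocity
    (0.00, 9.0, 10.0),    # 3: 90-degree left turn
    (0.00, 0.0, 30.0),    # 4: constant velocity
    (0.00, 9.0, 10.0),    # 5: 90-degree left turn
    (0.00, 0.0, 60.0),    # 6: constant velocity
    (-0.05, 0.0, 10.0),   # 7: decelerate
]

dt = 0.01                              # 100 Hz, matching the INS calculation frequency
speed, heading = 0.0, 0.0              # m/s, rad
x, y = 0.0, 0.0
track = []
for acc, rate, duration in segments:
    for _ in range(int(duration / dt)):
        speed += acc * dt
        heading += np.deg2rad(rate) * dt
        x += speed * np.cos(heading) * dt
        y += speed * np.sin(heading) * dt
        track.append((x, y, speed, heading))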

In the whole movement, the calculation frequency of the INS is set to 100 Hz and the update cycle of the vision information of the artificial landmarks is set to 3 s. In the simulation, the initial errors, the IMU errors, and the vision sensor errors are set as follows:
(i) Initial position errors:
(a) x-axis position error: 3 cm;
(b) y-axis position error: 3 cm.
(ii) Initial attitude errors:
(a) orientation error: 1.5°;
(b) pitch error: 0.08°;
(c) roll error: 0.08°.
(iii) Gyroscope:
(a) bias: 35°/h;
(b) noise: 30°/√h.
(iv) Accelerometer:
(a) bias: 1 mg;
(b) noise: 500 µg/√Hz.
(v) Vision sensor:
(a) pixel noise: 5 pixel/√Hz;
(b) angle noise: 0.4°/√Hz.
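To relate these specifications to the filter's noise matrices, the following sketch converts them to per-sample standard deviations at the 100 Hz IMU rate and assembles a measurement noise covariance for the 3-dimensional observation. The conversion formulas are standard; the resulting values, and the treatment of the vision noise figures as 1-sigma values, are illustrative assumptions rather than the exact tuning used in the paper.

import numpy as np

fs = 100.0                                          # IMU rate in Hz

# gyroscope: 30 deg/sqrt(h) angular random walk -> rad/s standard deviation per sample
gyro_noise_std = np.deg2rad(30.0) / 60.0 * np.sqrt(fs)

# accelerometer: 500 ug/sqrt(Hz) -> m/s^2 standard deviation per sample (1 g = 9.8 m/s^2)
accel_noise_std = 500e-6 * 9.8 * np.sqrt(fs)

# vision: 5 pixel and 0.4 deg treated as 1-sigma measurement noise
R = np.diag([5.0 ** 2, 5.0 ** 2, np.deg2rad(0.4) ** 2])

print(gyro_noise_std, accel_noise_std)
print(R)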

The related results of the simulations are illustrated in Figures 5–17. Figures 5–7 show the estimates of the gyroscope biases in integrated navigation. Figures 8-9 show the estimates of the x-axis and y-axis accelerometer biases in integrated navigation. Figures 10-11 show the x-axis and y-axis velocity errors in integrated navigation. Figures 12–14 show the orientation error and the level attitude errors in integrated navigation. Figures 15–17 show the x-axis position error, the y-axis position error, and the level position error in integrated navigation.

From the simulation results, we can find the following useful results. As shown in Figures 5–9, the IMU errors can be estimated in the process of navigation. The estimated x-axis and y-axis gyroscope biases are about 34°/h, and the estimated z-axis gyroscope bias is about 38°/h. The estimated x-axis accelerometer bias is about 950 µg, and the estimated y-axis accelerometer bias is about 850 µg. It can be seen that this is a "deep-integrated" method that can estimate the IMU errors effectively. As shown in Figures 10-11, the velocity error is restrained to about 0.02 m/s by this integrated method. As shown in Figures 12–14, the method can effectively estimate and compensate the initial attitude errors: the orientation error is restrained to within 0.3 degrees and the level attitude errors to within 0.03 degrees, so high precision velocity and attitude are obtained. As shown in Figures 15–17, the level position error is restrained to within 4 centimeters. The current cycle of vision information correction is 3 s; higher navigation precision would be obtained if the landmark spacing were reduced, which shortens the correction cycle. From the simulation results, this vision/inertia integrated positioning method using position and orientation matching realizes the "deep integration" of the MEMS INS and the vision positioning method based on artificial landmarks. It is a fully autonomous navigation method and can realize centimeter-level positioning.

5. Experiment

A vehicle test is conducted to demonstrate the validity of the new integrated navigation method proposed in this work. The MEMS IMU and the camera are installed together on the test vehicle, as shown in Figure 18. The MEMS IMU outputs the three-axis angular rate and acceleration simultaneously, and its update frequency is set to 100 Hz. The gyroscope error is 40°/h and the accelerometer error is 800 µg. The resolution of the camera used in the test is , and its image update frequency is 30 frames per second. The installation direction is vertically downward and the installation height is 50 cm. When the camera captures a landmark, the vision processor extracts from the picture the orientation, the pixel position of the landmark, and the absolute position of the landmark encoded in the landmark pattern. The side length of the square contour of the landmark is 7 cm and the placement spacing is 1.2 m. The length of the vehicle is about 55 cm and its width is about 20 cm. The speed of the vehicle is controlled at about 1.5 m/s. 15 landmarks are placed along the navigation path in a straight line. In order to obtain the navigation errors, 6 datum marks are set on the running path of the test vehicle. The navigation errors are obtained from the difference between the navigation output and the datum mark position when the vehicle passes each datum mark.

When the vehicle reaches a datum mark, the navigation errors are recorded. The corresponding results are listed in Table 1.

From the results of the vehicle test, we can see that the navigation errors are restrained to within 5.5 cm. This validates the integrated navigation method proposed in this work and shows that the vision/inertia integrated positioning method using position and orientation matching can achieve centimeter-level positioning.

6. Conclusion

A vision/inertia integrated positioning method using position and orientation matching, which can be adopted on intelligent vehicles such as automated guided vehicles (AGVs) and mobile robots, is proposed in this work. The vision processor extracts azimuth and position information from pictures that contain artificial landmarks. The inertial navigation system (INS) calculates the azimuth and position of the vehicle in real time, and the calculated pixel position of the landmark can be computed from the INS position results. Integrated navigation is implemented by a Kalman filter whose observations are the orientation and the calculated pixel position of the landmark. Navigation errors and inertial measurement unit (IMU) errors are estimated and compensated online. Both simulation and test results prove that this vision/inertia integrated positioning method using position and orientation matching is feasible and achieves the "deep integration" of the MEMS INS and the vision positioning method based on artificial landmarks. It is a fully autonomous navigation method and can realize centimeter-level positioning.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.