Journal of Sensors
Volume 2015, Article ID 347379, 7 pages
Research Article

Kalman Filter Sensor Fusion for Mecanum Wheeled Automated Guided Vehicle Localization

School of Mechanical Engineering, Pusan National University, Jangjeon-dong, Geumjeong-gu, Busan 609-735, Republic of Korea

Received 1 December 2014; Accepted 14 January 2015

Academic Editor: Guangming Song

Copyright © 2015 Sang Won Yoon et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


The Mecanum automated guided vehicle (AGV), which can move in any direction by using a special wheel structure with a LIM-wheel and a diagonally positioned roller, holds considerable promise for the field of industrial electronics. Conventional methods for Mecanum AGV localization have certain limitations, such as slip phenomena, because of variations in the road surface and ground friction. Precise localization is therefore a critical issue in situations where slip is unavoidable. We developed a sensor fusion technique using a Kalman filter to cope with this drawback. An encoder (ENCODER) and StarGazer were used for the sensor fusion. StarGazer, a position sensor based on an image recognition device, always generates some errors due to the limitations of image recognition. ENCODER accumulates errors over time but produces no moving errors. In this study, we developed a Mecanum AGV prototype system and showed by simulation that the disadvantages of each sensor can be eliminated. We obtained precise localization of the Mecanum AGV under slip conditions via sensor fusion using a Kalman filter.

1. Introduction

The Mecanum automated guided vehicle (AGV), which is mounted on Mecanum wheels with rollers attached to the axis of rotation at an angle of 45°, can move in any direction. Using this special wheel structure, the Mecanum AGV can move in narrow spaces and avoid obstacles easily, and it can thus reduce process time in factory automation. The Mecanum AGV requires an autonomous navigation system to operate in a factory automation environment, and the core technology of such a system is indoor localization. However, conventional localization methods for a Mecanum AGV have limitations, such as the slip phenomenon. Because of the special structure in which the rollers are attached to the axis of rotation, the Mecanum AGV frequently slips under variations in road surface and ground friction. A conventional method such as dead reckoning therefore accumulates errors because of this inevitable slip of the Mecanum wheel [1].

Laser navigation systems have been used as AGV localization sensors. However, such sensors are very expensive and their response time is slow, so a laser system is inappropriate for an indoor navigation system [2]. Other methods for absolute localization include radio frequency identification (RFID) [3], the Active Badge system using infrared light developed at AT&T Labs [4], MIT's Cricket system based on ultrasonics [5], and Ubisense's Ubitag based on ultra-wideband (UWB) [6]. However, no single localization method is an absolute solution. Sensor fusion methods for mobile robot localization use a Kalman filter [7, 8] or a particle filter [9, 10]; both are based on the Bayesian filter [11]. Many researchers have studied sensor fusion techniques using two or more sensors for mobile robot localization; for example, Lee et al. used a laser and an encoder [12], and Rigatos used sonar and an encoder [13]. In this paper, the StarGazer localization sensor (HAGISONIC Co. [14]) was used. As shown in Figure 1, this image sensor analyzes an infrared ray image reflected from a passive landmark with an independent ID. This image-based sensor has the advantage of sensing the absolute position of the mobile robot; however, moving errors and unexpected errors occur because of landmark misrecognition [15, 16]. On the other hand, localization using ENCODER generates accumulated errors but no moving errors or unexpected errors. Thus, we exploited the advantages of both sensor types complementarily by using Kalman filter sensor fusion, and we demonstrate by simulation the precise localization of the Mecanum AGV under inevitable slip conditions.

Figure 1: Principle of StarGazer operation.

This paper is organized as follows. In Section 2, we describe our analysis of the kinematic modeling of the Mecanum AGV. In Section 3, we describe the Kalman filter sensor fusion algorithms and system modeling. In Section 4, we evaluate sensor fusion algorithms using MATLAB simulation. Finally, our conclusions are given in Section 5.

2. Kinematics Modeling of the Mecanum AGV

The coordinate frames of a Mecanum AGV with Mecanum wheels are shown in the floor plan in Figure 2. The Mecanum AGV kinematics were analyzed for each coordinate frame [15].

Figure 2: Coordinate of Mecanum AGV.

We derive the kinematics model for Mecanum AGV localization. In the body frame, the velocity of the Mecanum AGV, $v = [v_x\ v_y\ \omega]^T$, determines the linear velocity $V_i$ of each Mecanum wheel. That is, the Jacobian equation of the Mecanum AGV kinematics model maps $v$ to the wheel velocities $V = [V_1\ V_2\ V_3\ V_4]^T$, and the inverse Jacobian equation maps $V$ back to $v$.

We obtain (1) by representing this relation as the matrix equation $V = Jv$, with $l$ and $w$ the half length and half width of the wheelbase:

$$V = \begin{bmatrix} 1 & -1 & -(l+w) \\ 1 & 1 & l+w \\ 1 & 1 & -(l+w) \\ 1 & -1 & l+w \end{bmatrix} \begin{bmatrix} v_x \\ v_y \\ \omega \end{bmatrix}. \tag{1}$$

To invert the nonsquare matrix $J$, we use the pseudoinverse matrix $J^{+}$:

$$J^{+} = (J^T J)^{-1} J^T = \frac{1}{4}\begin{bmatrix} 1 & 1 & 1 & 1 \\ -1 & 1 & 1 & -1 \\ -\dfrac{1}{l+w} & \dfrac{1}{l+w} & -\dfrac{1}{l+w} & \dfrac{1}{l+w} \end{bmatrix}. \tag{2}$$

Therefore, the Jacobian equation for $v$ is given by

$$v = J^{+} V. \tag{3}$$

Each wheel's linear velocity is the product of its angular velocity and the wheel radius. Thus $V_i = R\,\omega_i$, where $R$ is the Mecanum wheel radius and $\omega_i$ is the $i$th wheel's angular velocity. This can be represented as

$$v = R\, J^{+} \begin{bmatrix} \omega_1 \\ \omega_2 \\ \omega_3 \\ \omega_4 \end{bmatrix}. \tag{4}$$

Now, $v$ is expressed in the moving (body) coordinate frame, so we must transform it into the reference coordinate frame using a transformation matrix:

$$\begin{bmatrix} \dot{X} \\ \dot{Y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} v. \tag{5}$$

We can obtain the position by integrating this velocity in the reference coordinate frame. This method is called dead reckoning, and conventional mobile robot positioning systems typically use it.
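The derivation above can be sketched in code. A minimal sketch (in Python rather than the paper's MATLAB; the wheel radius R and half-wheelbase dimensions l and w below are assumed illustrative values, not the prototype's):

```python
import numpy as np

def forward_kinematics(wheel_omega, R=0.05, l=0.2, w=0.15):
    """Body-frame velocity [vx, vy, omega] from the four wheel angular
    velocities via the pseudoinverse Jacobian of (3)-(4).
    R: wheel radius (m); l, w: half wheelbase length/width (assumed)."""
    J = np.array([[1.0, -1.0, -(l + w)],
                  [1.0,  1.0,  (l + w)],
                  [1.0,  1.0, -(l + w)],
                  [1.0, -1.0,  (l + w)]])
    J_pinv = np.linalg.pinv(J)          # equals (J^T J)^{-1} J^T
    return R * J_pinv @ np.asarray(wheel_omega, dtype=float)

def dead_reckon(pose, wheel_omega, dt, **geom):
    """One dead-reckoning step: rotate the body velocity into the
    reference frame as in (5) and integrate over dt."""
    x, y, th = pose
    vx, vy, om = forward_kinematics(wheel_omega, **geom)
    c, s = np.cos(th), np.sin(th)
    return (x + (c * vx - s * vy) * dt,
            y + (s * vx + c * vy) * dt,
            th + om * dt)
```

With equal wheel speeds the rollers cancel the lateral components, so the sketch yields pure forward motion, matching the intuition behind the Jacobian's first column.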

3. Kalman Filter Sensor Fusion Algorithms for Localization

The dead reckoning method described in Section 2 has inevitable accumulated errors because of the Mecanum wheel's mechanical structure and variations in the road surface. This approach can therefore be used over short distances, but not for long distances or path following. For long distances and path following, a sensor fusion technique combining dead reckoning with another localization sensor is generally used [16].

A Kalman filter operates recursively on streams of noisy input data to produce a statistically optimal estimate of the underlying system state. The filter is named after Rudolf (Rudy) E. Kálmán, one of the primary developers of its theory [17]. The Kalman filter has numerous applications in technology; common ones include guidance, navigation, and control of vehicles, particularly aircraft and spacecraft.

The Kalman filter algorithm consists of four processes, as shown in Figure 3. The algorithm includes prediction and estimation functions.

Figure 3: Kalman filter algorithm.

(1) Prediction. This is the first step shown in Figure 3. Using the previous estimate $\hat{x}_{k-1}$ and its covariance $P_{k-1}$ as inputs, the predicted estimate $\hat{x}_k^{-}$ and covariance $P_k^{-}$ are calculated. These values are used in the estimation step.

(2) Estimation. This comprises the second, third, and fourth steps shown in Figure 3. The inputs are the predicted values from the prediction step and the measurement $z_k$; the final outputs of these steps are the estimate $\hat{x}_k$ and its covariance $P_k$.

Here the covariance $P_k$ is the criterion expressing the difference between the true value and the Kalman filter estimate:

$$x_k \sim N(\hat{x}_k, P_k).$$

That is, the state variable follows a normal distribution whose mean is $\hat{x}_k$ and whose covariance is $P_k$. The Kalman filter algorithm chooses the estimate at which this probability distribution attains its maximum value.
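The two phases above can be sketched as one cycle of a generic linear Kalman filter (an illustrative sketch only; the matrices A, H, Q, and R are placeholders, not the paper's specific models):

```python
import numpy as np

def kf_step(x, P, z, A, H, Q, R):
    """One Kalman filter cycle: prediction (time update) followed by
    estimation (measurement update), as in Figure 3."""
    # Prediction: propagate the previous estimate and its covariance.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Estimation: compute the Kalman gain and correct with measurement z.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

Fed a stream of measurements of a constant state, the covariance shrinks and the estimate converges, which is the recursive behavior the text describes.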

Figure 4 shows the Kalman filter sensor fusion of encoder and StarGazer for Mecanum AGV localization, in which the system model is based on ENCODER and the observation model on StarGazer.

Figure 4: Diagram for Kalman filter sensor fusion.

The Kalman filter discrete system modeling and observation modeling based on Jacobian equation (3) of the kinematics modeling of the Mecanum AGV are described as follows.

(i) ENCODER-Based System Modeling. Consider

$$v_k^{b} = R\, J^{+} u_k, \tag{6}$$

where $u_k = [\omega_1\ \omega_2\ \omega_3\ \omega_4]_k^T$ is the control input, the wheel angular velocities measured by ENCODER. The velocity $v_k^{b}$ is expressed in the moving coordinate frame, so, using the transformation matrix, a transform is made to the reference coordinate system:

$$v_k^{G} = \begin{bmatrix} \cos\theta_{k-1} & -\sin\theta_{k-1} & 0 \\ \sin\theta_{k-1} & \cos\theta_{k-1} & 0 \\ 0 & 0 & 1 \end{bmatrix} v_k^{b}, \tag{7}$$

giving the discrete system model

$$x_k = x_{k-1} + T\, v_k^{G} + w_k, \tag{8}$$

where $x_k = [X\ Y\ \theta]_k^T$, $T$ is the sampling period, and $w_k$ is the process noise.

(ii) StarGazer-Based Observation Modeling. The observation model is the StarGazer sensor-based position value $z_k = [X\ Y\ \theta]_k^T$. The observation model is designed to include the disturbance $v_k$:

$$z_k = H x_k + v_k, \qquad H = I_3, \tag{9}$$

where the experimentally determined measurement noise of StarGazer follows $v_k \sim N(0,\ 0.1^2)$, a normal distribution whose mean is 0 and whose standard deviation is 0.1.

The system model and observation model are combined by Kalman filter sensor fusion, where the position acquired by StarGazer is the measured value and (8) represents the system model. The Kalman filter sensor fusion therefore eliminates the StarGazer position errors by applying the Kalman filter algorithm to the ENCODER-based system model.

4. Experiment and Simulation Results

4.1. Mecanum AGV Localization System Prototype

Figure 5 shows our prototype of the Mecanum AGV for localization. A MyRIO (NI Co.) was used for sensor data acquisition, and we developed LabVIEW code to acquire the ENCODER and StarGazer sensor data.

Figure 5: Prototype of Mecanum AGV.

Specifications for the chassis, wheels, and motor are shown in Table 1. The Mecanum wheel and roller are made of aluminum and synthetic rubber, and the gear ratio is 64 : 1. Two MAI-2MT-DC drivers were used for the drive motors. The optical encoder has a resolution of 12 CPR (counts per revolution). StarGazer provides position data ten times per second over RS232 to the MyRIO.
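As an illustration of how the stated encoder specification translates into a wheel speed for the kinematics model, a small helper (assuming the 12 CPR count is taken at the motor shaft before the 64 : 1 gearing, and ignoring any quadrature multiplication of the count):

```python
import math

def wheel_angular_velocity(ticks, dt, cpr=12, gear_ratio=64):
    """Wheel angular velocity (rad/s) from encoder ticks counted over dt
    seconds. cpr and gear_ratio follow the prototype specs (12 CPR, 64:1);
    counting the ticks at the motor shaft is an assumption."""
    rev_motor = ticks / cpr                # motor shaft revolutions
    rev_wheel = rev_motor / gear_ratio     # wheel revolutions after gearing
    return 2.0 * math.pi * rev_wheel / dt
```

For example, 768 ticks in one second is exactly one wheel revolution (12 × 64 counts), i.e., 2π rad/s.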

Table 1: Specification of Mecanum AGV.
4.2. Localization by ENCODER

In the localization experiment using ENCODER, the Mecanum AGV drove a square path 2 m on each side, and the position was obtained by integrating the velocity in the reference coordinate frame of (5). Figure 6 shows that localization by ENCODER accumulates errors because of integration errors and the slip phenomenon. Nevertheless, ENCODER has the advantage that there is no dead zone and there are no moving errors.

Figure 6: Mecanum AGV localization for ENCODER.
4.3. Localization by StarGazer

In this localization experiment, four landmark IDs were arranged in a square layout, as shown in Figure 7. The landmarks are spaced 1.8 m apart, and each landmark can be measured within a 2 m radius. StarGazer detects one landmark relative to another, and the position value is then calculated in the relative coordinate frame of each landmark.

Figure 7: Landmark ID of StarGazer.

Figure 8 shows position data calculated by StarGazer while the Mecanum AGV drove a 2 m square path. A total of 30 experiments were performed, and four of them are shown in Figure 8. The average moving error is less than about 10 cm. As shown in Figure 8, unexpected large position errors also occurred. These are caused by StarGazer becoming confused in the overlap area between one landmark and another; in our experiments, such large position errors occurred on average 1.4 times per 30 runs. These errors cause unintended motion of the AGV, since a misidentified position leads to wrong path following, so these terms must be removed in order to follow the correct path. In comparison to ENCODER, however, StarGazer can calculate the absolute position of the mobile robot without accumulated errors.

Figure 8: Mecanum AGV localization for StarGazer.
4.4. Kalman Filter Sensor Fusion Simulation Result

In this experiment, the Mecanum AGV drove a square path (width 2 m, height 2 m) in an area with dimensions of 4 m × 4 m. The measured values acquired by ENCODER and StarGazer were used in a MATLAB simulation. The localization value from ENCODER obtained by integrating (7), the localization value from StarGazer, and the Kalman filter sensor fusion value were then compared.

As shown in Figure 9, the green line represents localization by ENCODER, the blue line localization by StarGazer, and the red line the Kalman filter sensor fusion result. Sensor fusion removes both the accumulated error of ENCODER and the large position errors caused by the landmark misrecognition problem. These results show that the StarGazer localization errors were eliminated by the Kalman filter system model based on the Mecanum AGV kinematics model. The estimated path from the Kalman filter sensor fusion still deviates from the 2 m square path because of the Mecanum AGV slip phenomenon.

Figure 9: Square path experiment results of Mecanum AGV.
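The comparison made in this experiment can be imitated with a simplified one-dimensional simulation (a Python stand-in for the MATLAB study; the slip bias, noise level, speed, and path length below are assumed for illustration, not measured values):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, steps = 0.1, 200
v_true, slip_bias = 0.10, 0.02            # true speed, encoder slip bias (m/s)

x_true = v_true * dt * np.arange(1, steps + 1)   # true 1-D position
v_enc = v_true + slip_bias                       # biased encoder speed
z = x_true + rng.normal(0.0, 0.1, steps)         # noisy StarGazer fixes

# Dead reckoning: integrating the biased encoder velocity drifts steadily.
x_dr = v_enc * dt * np.arange(1, steps + 1)

# Kalman filter fusion: ENCODER prediction + StarGazer correction.
x_hat, P, Q, R = 0.0, 1.0, 1e-4, 0.01
fused = []
for k in range(steps):
    x_pred = x_hat + v_enc * dt               # prediction (system model)
    P_pred = P + Q
    K = P_pred / (P_pred + R)                 # Kalman gain
    x_hat = x_pred + K * (z[k] - x_pred)      # estimation (measurement update)
    P = (1.0 - K) * P_pred
    fused.append(x_hat)

err_dr = abs(x_dr[-1] - x_true[-1])   # drift from slip (0.4 m here)
err_kf = abs(fused[-1] - x_true[-1])  # fused error stays bounded
```

The fused estimate tracks the true position to within a few centimeters while the dead-reckoned position drifts by 0.4 m, mirroring the qualitative behavior in Figure 9.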

5. Conclusion

This paper addressed sensing precise localization values of a Mecanum AGV despite the unavoidable slip phenomenon. To overcome this phenomenon, we used two sensors: StarGazer and ENCODER. StarGazer can measure absolute localization values but produces large errors because of landmark misrecognition. ENCODER has no moving errors and no dead zone, but it accumulates errors because of the integration term and the slip phenomenon. A Kalman filter was used to combine the advantages of both sensor types. Our simulation results show that the Kalman filter sensor fusion method can remove the accumulated errors of ENCODER as well as the StarGazer moving errors and the large errors caused by landmark misrecognition. A Mecanum AGV capable of autonomous driving can move along narrow paths and, by moving sideways, can easily approach a conveyor line. The core technology for such autonomous driving is localization, and the proposed sensor fusion of ENCODER and StarGazer offers a Mecanum AGV localization solution that overcomes the unavoidable slip phenomenon. The method proposed in this paper is expected to bring innovation to factory automation. Future research topics include path tracking, path following, and map building methods using the Mecanum AGV.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


References

  1. J.-M. Kim, S.-B. Woo, J.-Y. Kim, J.-C. Do, S.-S. Kim, and S.-I. Bae, “Inertial navigation system for an automatic guided vehicle with Mecanum wheels,” International Journal of Precision Engineering and Manufacturing, vol. 13, no. 3, pp. 379–386, 2012.
  2. J. Park, Localization based sensor fusion for driving control of robot with mecanum wheel [M.S. thesis], Pusan National University, Busan, Republic of Korea, 2011.
  3. L. Jing and P. Yang, “A localization algorithm for mobile robots in RFID system,” in Proceedings of the International Conference on Wireless Communications, Networking and Mobile Computing (WiCOM '07), pp. 2109–2112, September 2007.
  4. R. Want, A. Hopper, V. Falcao, and J. Gibbons, “The Active Badge location system,” ACM Transactions on Information Systems, vol. 10, no. 1, pp. 91–102, 1992.
  5. N. Priyantha, A. Chakraborty, and H. Balakrishnan, “The Cricket location-support system,” in Proceedings of the 6th Annual International Conference on Mobile Computing and Networking (MobiCom '00), pp. 32–43, August 2000.
  7. S.-P. Kim, Essential Kalman Filter, A-JIN, Seoul, Republic of Korea, 3rd edition, 2014.
  8. R. Negenborn, Robot localization and Kalman filters: on finding your position in a noisy world [M.S. thesis], Utrecht University, Utrecht, The Netherlands, 2003.
  9. F. Gustafsson, F. Gunnarsson, N. Bergman et al., “Particle filters for positioning, navigation, and tracking,” IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 425–437, 2002.
  10. A. Doucet, N. de Freitas, and N. Gordon, Sequential Monte Carlo Methods in Practice, Springer, New York, NY, USA, 2001.
  11. D. Fox, J. Hightower, L. Liao, D. Schulz, and G. Borriello, “Bayesian filtering for location estimation,” IEEE Pervasive Computing, vol. 2, no. 3, pp. 24–33, 2003.
  12. I. S. Lee, J. Y. Kim, J. H. Lee, and S. Kim, “Kalman filter-based sensor fusion for improving localization of AGV,” Advanced Materials Research, vol. 488-489, pp. 1818–1822, 2012.
  13. G. G. Rigatos, “Extended Kalman and particle filtering for sensor fusion in motion control of mobile robots,” Mathematics and Computers in Simulation, vol. 81, no. 3, pp. 590–607, 2010.
  15. M.-S. Chae and T.-G. Yang, “A study on precise localization for mobile robot based on artificial landmarks in the ceiling,” Journal of Advanced Information Technology and Convergence, vol. 9, no. 8, pp. 85–92, 2011.
  16. S.-J. Park and T.-G. Yang, “A study on design of intelligent mobile robot based on localization sensor for unmanned transport logistics,” Journal of KIIT, vol. 11, no. 9, pp. 7–13, 2013.
  17. S. G. Tzafestas, Introduction to Mobile Robot Control, Elsevier, London, UK, 1st edition, 2014.