Journal of Sensors

Volume 2016, Article ID 8729895, 11 pages

http://dx.doi.org/10.1155/2016/8729895

## Camera Space Particle Filter for the Robust and Precise Indoor Localization of a Wheelchair

^{1}Universidad Autonoma de San Luis Potosi, 78290 San Luis Potosi, SLP, Mexico

^{2}Gannon University, Erie, PA 16541, USA

Received 19 December 2014; Accepted 25 March 2015

Academic Editor: Changhai Ru

Copyright © 2016 Raul Chavez-Romero et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper presents the theoretical development and experimental implementation of a sensing technique for the robust and precise localization of a robotic wheelchair. Estimates of the vehicle’s position and orientation are obtained from camera observations of visual markers located at discrete positions within the environment. A novel implementation of a particle filter in camera sensor space (Camera-Space Particle Filter) is used to combine visual observations with sensed wheel rotations mapped onto camera space through an observation function. The camera-space particle filter fuses the odometry and vision sensor information within camera space, resulting in a precise update of the wheelchair’s pose. Using this approach, an inexpensive implementation on an electric wheelchair is presented. Experimental results within three structured scenarios and the comparative performance of Extended Kalman Filter (EKF) and Camera-Space Particle Filter (CSPF) implementations are discussed. The CSPF was found to be more precise than the EKF in estimating the wheelchair’s pose, since the former does not require the assumption of a linear system affected by zero-mean Gaussian noise. Furthermore, the computational processing times of both implementations are of the same order of magnitude.

#### 1. Introduction

Recently, the use of diverse types of sensors and different strategies for information fusion has enabled important developments in key areas of robotics and artificial intelligence. Within these disciplines, a specific area of investigation is mobile robotics, where the sensor-based localization problem is an important research topic. Localization of an autonomous mobile robot is the main concern of a navigation strategy, since the actual position of the mobile robot must be known precisely in order to apply a control law or execute a desired task. In general, a navigation system requires a set of sensors and a fusion algorithm that integrates the sensor information to reliably estimate the pose of the mobile robot. One of the most commonly used sensors in wheeled mobile robots is the odometer (dead reckoning). Unfortunately, odometers are subject to accumulated errors introduced by wheel slippage or other uncertainties that may perturb the course of the robot. Therefore, odometric estimations need to be corrected by a complementary type of sensor. Reported works in autonomous robots present approaches where the odometry information is complemented with different types of sensors such as ultrasonic sensors [1–3], LIDAR (Light Detection and Ranging) [4–8], digital cameras [9–12], magnetic field sensors [13], the Global Positioning System (GPS) [7, 8, 14, 15], and Inertial Measurement Units (IMUs) [7, 15].

Among the different types of sensors there are advantages and drawbacks depending on the general application of the mobile robots considered. GPS receivers are low-cost and relatively easy to implement, but they have low accuracy and their use is not convenient for indoor environments. IMUs are relatively inexpensive, easy to implement, and efficient in outdoor and indoor conditions, but they are very sensitive to vibration-like noise and are not convenient for precise applications. LIDAR sensors have high accuracy and are robust for indoor and outdoor applications, with acceptable performance under variable light conditions; however, LIDAR sensors are expensive and the data processing is complex and time consuming. Camera sensors are inexpensive, are easy to implement, and are supported by a substantial set of tools for image processing and analysis. Although vision sensors are sensitive to light and weather conditions, their use in indoor structured environments with controlled light conditions is very reliable.

When several sensors are implemented in a single intelligent system (e.g., a mobile robot), it becomes necessary to implement a strategy that fuses the data from every sensor in order to optimize the information; combining sensor data in this way is usually called sensor fusion. With respect to the localization problem, the specialized literature reports several fusion strategies [16]. These techniques can be classified into heuristic algorithms (e.g., genetic algorithms and fuzzy logic) [3], optimal algorithms (Kalman Filter and grid-based estimations) [15], and suboptimal algorithms. Real-world problems normally utilize suboptimal Bayesian filtering, such as approximate grid-based estimations, Extended or Unscented Kalman Filters [9, 12, 17–19], and particle filtering methods [1, 20]. Due to their real-time processing capability and reliability, Kalman-based fusion techniques are implemented in many cases, under the assumption that the noise affecting the system is zero-mean and Gaussian. However, for the case of a robotic wheelchair such an assumption is rather strict and is not always satisfied [21].

Wheelchairs, unlike specialized mobile robots, exhibit many uncertainties related to their inexpensive construction, foldable structure, and the nonholonomic characteristics of the wheels. Hence, nonlinear and non-Gaussian assumptions become important for these low-end vehicles, where pose uncertainty can be a consequence of differing wheel diameters, gross misalignment, dynamic imbalance, or other problems due to daily use and factory defects. Thus, the filtering strategy is crucial in order to minimize all the uncertainties that are not considered in an ideal mathematical model.

Considering the special case where the mobile robot is a wheelchair intended to be used by a severely disabled person, the literature offers some good examples of the fusion of different sensors using nonoptimal algorithms. In [22], the wheelchair is controlled by the user through special devices, and a brain-machine interface control is proposed for semiautonomous driving. In [13], a line-follower-like wheelchair moves autonomously, updating its position by means of artificial metal markers and RFID tags. In [5, 23], a vision-based autonomous navigation control using an EKF and artificial markers is described. In [1], an autonomous wheelchair is developed in which odometry and ultrasonic sensor measurements are fused using a PF.

In this work, a vision-based particle filter (PF) fusion algorithm is proposed and compared with a Kalman-based algorithm. Particle filters can handle nonlinear systems affected by non-Gaussian noise but can be computationally intensive. A novel implementation of a PF in sensor space, called the camera-space particle filter (CSPF), is used to combine visual observations with sensed wheel rotations mapped onto camera space through an observation function. A main contribution of this project is the novel strategy of the CSPF implementation. The CSPF fuses the data from odometry and vision sensors in camera space, resulting in a precise update of the wheelchair’s pose. The particles used by the CSPF are a set of odometry estimations, each with random initial conditions. Each estimated position is mapped into camera space through an observation function. In this work, the PF is executed in the sensor space with every visual measurement. Using this strategy, the computational demand is reduced considerably, since the filtering is applied to a set of horizontal pixel positions and a single marker observation (point of interest), avoiding the need for exhaustive image processing. The computational processing time of the implementation presented here is shown to be of the same order of magnitude as that of a typical Kalman Filter implementation.
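The CSPF update described above can be sketched as follows. This is an illustrative reconstruction, not the authors’ code: the observation function, particle count, and pixel-noise scale (`observe`, `N_PARTICLES`, `SIGMA_PX`) are assumptions introduced for the example.

```python
import numpy as np

N_PARTICLES = 200   # assumed particle count
SIGMA_PX = 5.0      # assumed pixel-noise scale of the weighting kernel

def observe(pose, marker_xy, cam_params):
    """Map a planar pose and a known marker position to a horizontal pixel
    coordinate through a pinhole-style observation function (assumed form)."""
    x, y, theta = pose
    dx, dy = marker_xy[0] - x, marker_xy[1] - y
    xc = np.cos(theta) * dx + np.sin(theta) * dy    # forward distance to marker
    yc = -np.sin(theta) * dx + np.cos(theta) * dy   # lateral offset of marker
    f, u0 = cam_params                              # focal length [px], principal point
    return u0 - f * yc / xc                         # horizontal pixel position

def cspf_update(particles, weights, u_measured, marker_xy, cam_params):
    """Reweight odometry-propagated particles by their agreement, in pixel
    space, with the observed marker position, then resample."""
    u_pred = np.array([observe(p, marker_xy, cam_params) for p in particles])
    weights = weights * np.exp(-0.5 * ((u_pred - u_measured) / SIGMA_PX) ** 2)
    weights /= weights.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

Each particle is a full pose hypothesis propagated by odometry, but only its predicted horizontal pixel position is compared against the measurement, which is what keeps the filtering inexpensive.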

#### 2. Methods and Materials

In this work, only the nominal kinematic model is required to estimate the position of the wheelchair. Since the kinematic model considers only nonholonomic constraints, real-world disturbances such as slipping and sliding of the wheels with respect to the ground (or errors from other sources) are not taken into account and therefore must be corrected. To update the wheelchair’s position, the kinematic model is coupled with a pinhole camera model. Applying the solution proposed here, observations of passive artificial markers at known positions are used to estimate the wheelchair’s physical position through an observation function that maps from the planar physical space to camera space. In what follows, the kinematic model and the approach used to set up the observation function are reviewed first, followed by a description of the camera-space particle filter algorithm.
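As a concrete illustration of such an observation function, the following sketch projects a marker at a known world position into pixel coordinates with a pinhole model. The camera height, focal length, principal point, and sign conventions (`h`, `f`, `c`) are hypothetical values chosen for the example, not the paper’s calibration.

```python
import numpy as np

def pinhole_project(marker_w, pose, h=1.0, f=500.0, c=(320.0, 240.0)):
    """Project a marker at a known 3D world position into pixel coordinates
    for a camera mounted on the wheelchair at height h, looking along the
    vehicle's forward axis. h, f, and c are illustrative values."""
    x, y, theta = pose
    X, Y, Z = marker_w
    dx, dy = X - x, Y - y
    # world -> camera frame: forward Xc, left Yc, up Zc
    Xc = np.cos(theta) * dx + np.sin(theta) * dy
    Yc = -np.sin(theta) * dx + np.cos(theta) * dy
    Zc = Z - h
    # pinhole projection (u grows rightward, v grows downward in the image)
    u = c[0] - f * Yc / Xc
    v = c[1] - f * Zc / Xc
    return u, v
```

A marker straight ahead at camera height lands on the principal point; markers to the left of the heading appear at smaller horizontal pixel coordinates.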

##### 2.1. Navigation Strategy

In this approach a metric map is built during a training stage: a person drives the wheelchair through a desired path, along which different estimated positions of the wheelchair are recorded. Based on the acquired metric map, the wheelchair then moves autonomously, following the instructions recorded in the training stage. In both stages, whether the wheelchair is building the metric map or tracking the reference path, it estimates its position using the CSPF proposed here. This strategy is convenient for disabled users since it is not necessary to visit every place inside the work area, as required by other approaches typically used for exploration in mobile robotics, such as Simultaneous Localization and Mapping (SLAM) [6].

##### 2.2. Kinematic Model

The kinematic model used for controlling the wheelchair is that of a unicycle. The reference point used to identify the position of the wheelchair in the physical space is assumed to be at the middle of the traction wheel axis; see Figure 1. The $x$ and $y$ coordinates are considered with respect to an inertial fixed reference frame. $r$ is the radius of the traction wheels and $b$ is the distance between the wheels. The wheelchair’s orientation angle is represented by $\theta$, and $\phi$ defines the average between the right and left driving wheels’ rotation angles $\phi_r$ and $\phi_l$, respectively:

$$\phi = \frac{\phi_r + \phi_l}{2}. \quad (1)$$

The position of the reference point is obtained from integrating the following kinematic system of equations:

$$\dot{x} = r \dot{\phi} \cos\theta, \qquad \dot{y} = r \dot{\phi} \sin\theta. \quad (2)$$

The variable $\theta$ can be defined as a function of the differential forward rotations of the two driving wheels:

$$\theta = \frac{r}{b} \left( \phi_r - \phi_l \right). \quad (3)$$

The state of the system is defined by the wheelchair’s pose $\mathbf{X} = \left[ x, y, \theta \right]^{T}$. In general, (2) can be expressed compactly as

$$\dot{\mathbf{X}} = f\!\left( \mathbf{X}, \dot{\phi}_r, \dot{\phi}_l \right); \quad (4)$$

thus, (4) can be solved by odometry integration.
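A single Euler step of this odometry integration can be sketched as follows; the wheel radius and track width defaults (`r`, `b`) are illustrative values, not the wheelchair’s actual dimensions.

```python
import numpy as np

def integrate_odometry(pose, dphi_r, dphi_l, r=0.17, b=0.55):
    """One Euler step of the unicycle odometry: update pose [x, y, theta]
    from incremental wheel rotations dphi_r, dphi_l (rad).
    r (wheel radius) and b (track width) are illustrative values."""
    x, y, theta = pose
    dphi = 0.5 * (dphi_r + dphi_l)                 # average wheel rotation
    x = x + r * dphi * np.cos(theta)               # forward displacement, x
    y = y + r * dphi * np.sin(theta)               # forward displacement, y
    theta = theta + (r / b) * (dphi_r - dphi_l)    # differential rotation
    return np.array([x, y, theta])
```

Equal wheel rotations produce a straight-line advance of $r\,\Delta\phi$, while opposite rotations produce a pure turn in place; accumulating these steps is exactly the dead-reckoning estimate that the CSPF later corrects.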