Review Article

Review of Neurobiologically Based Mobile Robot Navigation System Research Performed Since 2000

Table 1. Neurobiologically based navigation research.

Each entry below is organized by the table's columns: authors/articles; platform/sensors; visual capabilities; brain cells emulated; cognitive map; route planning and autonomy.

Arleo and Gerstner [12]
Platform/sensors: (i) Khepera mobile robot platform; (ii) 8 IR sensors for obstacle detection; (iii) light detector for ambient light measurement; (iv) camera with 90° horizontal view for self-localization; (v) odometer for self-motion.
Visual capabilities: (i) Offline, unsupervised Hebbian learning for neural network (NN) training; (ii) four 90° horizontal snapshots (N, W, S, E) combined into a single, location-recognizable view; (iii) used primarily to assist with robot NN directionality.
Brain cells emulated: (i) Place cells; (ii) head direction (HD) cells.
Cognitive map: Built into the NNs of place cells and head direction cells (uses an external homing light and offline NNs).
Route planning and autonomy: Relies on an external computer, thus not autonomous.
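
Arleo and Gerstner's place-cell network is trained with unsupervised Hebbian learning. As a rough illustration only, the sketch below shows a generic Hebbian weight update driving toy place-cell units from a visual feature vector; the function name, dimensions, and normalization are hypothetical and not taken from their implementation.

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.01):
    """One unsupervised Hebbian step: strengthen connections between
    co-active presynaptic (visual) and postsynaptic (place-cell) units,
    then renormalize so the weights stay bounded."""
    weights = weights + lr * np.outer(post, pre)      # Hebb rule: dW = lr * post x pre
    return weights / max(1.0, np.linalg.norm(weights))

# Toy usage: 8 visual features driving 4 place-cell units.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 8))
visual_input = rng.random(8)            # stand-in for the fused 4 x 90-degree view
place_activity = W @ visual_input       # linear place-cell response
W = hebbian_update(W, visual_input, place_activity)
```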

Fleischer et al. [27]
Platform/sensors: (i) Brain-based device (BBD) with a Beowulf cluster; (ii) robot platform with (a) CCD camera, (b) compass, (c) laser range finder, (d) whisker system, and (e) odometer.
Visual capabilities: (i) Transformation of RGB video data (320 × 240 pixels) to YUV color space on one of the cluster computers; after some processing, color neuronal units interface to inferotemporal cortex and edge units to parietal cortex.
Brain cells emulated: (i) Place cells; (ii) dentate gyrus; (iii) entorhinal cortex (EC) and other medial temporal lobe cells.
Cognitive map: Limited movement in a plus-maze; directional choice at the intersection is learned by place cells in the hippocampus.
Route planning and autonomy: (i) Route retrospective and prospective responses/planning shown in backtrace analysis. Not autonomous due to the external BBD.
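
The visual front end above converts RGB frames to YUV color space before further processing. The following minimal sketch applies the common BT.601 RGB-to-YUV matrix to a frame of the cited 320 × 240 resolution; the exact coefficients and processing order used by the BBD are not specified in the table, so treat this as a generic illustration.

```python
import numpy as np

def rgb_to_yuv(frame):
    """Convert an (H, W, 3) RGB frame with values in [0, 1] to YUV using
    the common BT.601 coefficients."""
    m = np.array([[ 0.299,  0.587,  0.114],   # Y  (luma)
                  [-0.147, -0.289,  0.436],   # U  (blue-difference chroma)
                  [ 0.615, -0.515, -0.100]])  # V  (red-difference chroma)
    return frame @ m.T

# Toy usage on a frame matching the 320 x 240 resolution cited above.
frame = np.random.rand(240, 320, 3)
yuv = rgb_to_yuv(frame)
```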

Strösslin et al. [13]
Platform/sensors: (i) Khepera mobile robot platform; (ii) camera with 60° horizontal FOV; (iii) odometers; (iv) proximity sensors.
Visual capabilities: (i) Simulates the rodent's FOV by rotating the camera 4 times to obtain a 240° FOV image; (ii) extracts directional information from visual inputs; (iii) path integration through visual and self-motion information.
Brain cells emulated: (i) Place cells; (ii) HD cells; (iii) action cells, located in the dentate gyrus; (iv) many neurophysiologically based elements.
Cognitive map: Combined place code (CPC) neurons, where visual and odometric information is stored.
Route planning and autonomy: Biologically inspired reinforcement learning mechanism in continuous state space. Not autonomous due to use of an external PC.
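
Strösslin et al. combine visual cues with self-motion (odometric) information for path integration. Below is a minimal, generic differential-drive path-integration step, not their model: the function, pose representation, and wheel-base value are illustrative assumptions; in the model summarized above, such an odometric estimate would be fused with vision.

```python
import math

def integrate_step(x, y, theta, d_left, d_right, wheel_base):
    """Minimal odometric path-integration step for a differential-drive robot:
    update the pose estimate (x, y, heading) from left/right wheel travel."""
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / wheel_base
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# Toy usage: start at the origin; wheel_base is an illustrative value in meters.
pose = (0.0, 0.0, 0.0)
pose = integrate_step(*pose, d_left=0.010, d_right=0.012, wheel_base=0.05)
```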

Hafner [14]
Platform/sensors: (i) Omnidirectional camera; (ii) compass.
Visual capabilities: (i) 360° snapshot divided into 16 segments and fed into a place cell NN, assisting with the robot's position determination.
Brain cells emulated: (i) Place cells.
Cognitive map: Topological map; relational navigation connections between place cells.
Route planning and autonomy: Can only be performed in simulation due to the amount of metric data processing required. Not autonomous.
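
Hafner's model feeds a 360° snapshot, divided into 16 segments, into a place-cell network. The sketch below shows one plausible way to split an unwrapped panorama into 16 angular sectors and reduce each to a single feature; the mean-intensity reduction and the array sizes are assumptions rather than details from the paper.

```python
import numpy as np

def panorama_features(panorama, n_segments=16):
    """Split an unwrapped panoramic image (H x W array) into n_segments equal
    angular sectors along the width (the 360-degree axis) and reduce each
    sector to its mean intensity, giving a compact input vector for a NN."""
    sectors = np.array_split(panorama, n_segments, axis=1)
    return np.array([sector.mean() for sector in sectors])

# Toy usage: a fake 64 x 512 unwrapped panorama -> 16-element feature vector.
snapshot = np.random.rand(64, 512)
features = panorama_features(snapshot)   # shape (16,)
```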

Barrera and Weitzenfeld [2, 22]
Platform/sensors: (i) Sony AIBO 4-legged robot; (ii) camera with 50° horizontal FOV; (iii) turns limited to increments of ±45°; (iv) external PC with a 1.8 GHz Pentium 4 processor, which runs the navigation model and connects wirelessly to the AIBO robot.
Visual capabilities: (i) Simple color recognition representing landmarks and the goal; (ii) distance extracted from images of the engineered environment and known relations.
Brain cells emulated: (i) Place cells and many neurophysiologically based elements.
Cognitive map: Place cells (nodes) and connections (edges); simple T-maze and 8-arm maze.
Route planning and autonomy: Ability to learn and unlearn goal locations. Not autonomous due to use of an external PC.
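
Landmarks and the goal are recognized by simple color recognition. The following toy color-blob detector thresholds an RGB frame and returns the blob centroid; the thresholds, minimum blob size, and frame size are purely illustrative and not from Barrera and Weitzenfeld's system.

```python
import numpy as np

def detect_landmark(frame, lower, upper, min_pixels=200):
    """Toy color-blob landmark detector: threshold an (H, W, 3) RGB frame
    (values in [0, 1]) against a color range and return the blob centroid
    in image coordinates, or None if too few pixels match."""
    mask = np.all((frame >= lower) & (frame <= upper), axis=-1)
    if mask.sum() < min_pixels:
        return None
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

# Toy usage: look for a predominantly red landmark in a random frame.
frame = np.random.rand(160, 208, 3)
centroid = detect_landmark(frame, lower=(0.7, 0.0, 0.0), upper=(1.0, 0.3, 0.3))
```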

Wyeth and Milford [19, 20]
Platform/sensors: (i) Pioneer 2-DXE robot; (ii) motor encoders for odometry; (iii) sonar and laser range finder for collision avoidance and pathway centering; (iv) panoramic camera system for landmark recognition.
Visual capabilities: (i) 360° snapshot; each unique snapshot is stored as a local view cell (VC) for landmark recognition.
Brain cells emulated: (i) Place cell and head direction cell combined as a pose cell.
Cognitive map: Stored in an experience map, which is created from the pose cells in a competitive attractor network (CAN).
Route planning and autonomy: Office delivery locations are stored on the mobile robot, which uses the experience map and CAN to make deliveries. Autonomous.
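
The cognitive map here is an experience map: a graph of experiences linked by odometric transitions. The sketch below defines only that toy graph structure; the real system's pose-cell CAN dynamics and map relaxation are not modeled, and the class and field names are hypothetical rather than taken from Wyeth and Milford's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Experience:
    """One node of a toy experience map: a pose estimate plus the local
    view (snapshot) that was active when the experience was created."""
    x: float
    y: float
    view_id: int
    links: dict = field(default_factory=dict)   # neighbor index -> travel cost

class ExperienceMap:
    """Minimal experience-map container: experiences linked by odometric
    transitions, over which delivery routes could be searched."""
    def __init__(self):
        self.experiences = []

    def add(self, x, y, view_id):
        self.experiences.append(Experience(x, y, view_id))
        return len(self.experiences) - 1

    def link(self, a, b, cost):
        self.experiences[a].links[b] = cost
        self.experiences[b].links[a] = cost

# Toy usage: three experiences along a corridor, linked in sequence.
emap = ExperienceMap()
a = emap.add(0.0, 0.0, view_id=1)
b = emap.add(1.0, 0.0, view_id=2)
c = emap.add(2.0, 0.5, view_id=3)
emap.link(a, b, cost=1.0)
emap.link(b, c, cost=1.1)
```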

Cuperlier et al. [24]
Platform/sensors: (i) Robot with three dual-core Pentium processors (3 GHz each); (ii) panoramic camera; (iii) compass to measure azimuth angles; (iv) wheel encoders.
Visual capabilities: (i) 360° snapshot taken at low resolution; the image is convolved with a difference of Gaussians (DoG) to detect characteristic points (landmark recognition).
Brain cells emulated: (i) Place cells coupled together to create transition cells.
Cognitive map: Topological map created online during an initial exploration phase; images and directions are used to create place cells, which are then used to create transition cells.
Route planning and autonomy: Uses the Bellman-Ford algorithm to choose the most direct route from the cognitive map (transition cells with weighted links). Autonomous.
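
Route planning in Cuperlier et al. relies on the Bellman-Ford algorithm over transition cells with weighted links. The sketch below is the textbook Bellman-Ford relaxation on a small weighted graph, shown only to make the planning step concrete; the edge list and costs are invented examples, not data from the paper.

```python
def bellman_ford(edges, n_nodes, source):
    """Textbook Bellman-Ford over a weighted directed graph, here standing in
    for a map of transition cells with weighted links. `edges` is a list of
    (u, v, w) tuples; returns per-node distance and predecessor lists."""
    INF = float("inf")
    dist = [INF] * n_nodes
    pred = [None] * n_nodes
    dist[source] = 0.0
    for _ in range(n_nodes - 1):           # relax every edge at most |V|-1 times
        updated = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                pred[v] = u
                updated = True
        if not updated:
            break                           # distances settled early
    return dist, pred

# Toy usage: four transition cells; plan routes outward from cell 0.
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 3, 2.5), (3, 2, 0.5)]
dist, pred = bellman_ford(edges, n_nodes=4, source=0)
```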