Applied Bionics and Biomechanics
Volume 2015, Article ID 543492, 9 pages
Research Article

Navigating from a Depth Image Converted into Sound

1University Grenoble Alpes, LPNC, 38000 Grenoble, France
2CNRS, LPNC, 38000 Grenoble, France
3University Grenoble Alpes, GIPSA-Lab, 38000 Grenoble, France

Received 18 July 2014; Accepted 18 January 2015

Academic Editor: Simo Saarakkala

Copyright © 2015 Chloé Stoll et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Supplementary Material

This video is a live camera view from a participant walking the maze with the Sensory Substitution Device (SSD). The MeloSee SSD converts depth into sound intensity, with stereo modulation encoding the horizontal axis. Height (vertical axis) is converted into pitch. The left view shows the depth map with color encoding, from red for the closest distance (50 cm) to blue for the furthest (250 cm). Areas in black are either too far or too close to be detected by the Kinect® sensor, and white areas are sensor shadow artifacts. Note that black and white areas are not encoded in audio. The right view shows the normal RGB camera view of the maze. The maze segment displayed here starts between two parallel screens, then turns 90° left after facing a wall.
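The depth-to-sound mapping described above can be sketched as follows. This is a minimal illustrative implementation, not the actual MeloSee code: the function name, the linear intensity ramp, and the frequency range are assumptions chosen only to mirror the described mapping (closer is louder, column position drives stereo pan, row position drives pitch, and out-of-range pixels are muted).

```python
import numpy as np

def depth_to_audio_params(depth, d_min=0.5, d_max=2.5,
                          f_low=200.0, f_high=2000.0):
    """Map a depth image (in meters) to per-pixel audio parameters.

    Hypothetical mapping inspired by the description above:
    - intensity: 1.0 at d_min (closest), falling linearly to 0.0 at d_max
    - pan: column position normalized to [0, 1] (0 = left, 1 = right)
    - pitch: top row -> f_high, bottom row -> f_low
    Pixels outside [d_min, d_max] (undetected by the sensor) are muted.
    """
    h, w = depth.shape
    valid = (depth >= d_min) & (depth <= d_max)
    # Louder for closer obstacles; silent when out of range.
    intensity = np.where(valid, (d_max - depth) / (d_max - d_min), 0.0)
    # Stereo pan follows the horizontal pixel position.
    pan = np.tile(np.linspace(0.0, 1.0, w), (h, 1))
    # Pitch follows the vertical pixel position (higher rows sound higher).
    pitch = np.tile(np.linspace(f_high, f_low, h)[:, None], (1, w))
    return intensity, pan, pitch
```

For example, a pixel at 50 cm yields full intensity while a pixel at 3 m (beyond the sensor range) is silent, matching the black areas in the video that carry no sound.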
