Advances in Human-Computer Interaction
Volume 2013 (2013), Article ID 163718, 6 pages
Blind Sailors’ Spatial Representation Using an On-Board Force Feedback Arm: Two Case Studies
CNRS UMR 6285 LabSTICC-IHSEV/HAAL, Telecom Bretagne, Technopôle Brest-Iroise, CS 83818, 29238 Brest Cedex 3, France
Received 13 May 2013; Accepted 18 November 2013
Academic Editor: Kerstin S. Eklundh
Copyright © 2013 Mathieu Simonnet and Eamonn Ryall. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Using a vocal, auditory, and haptic application designed for maritime navigation, blind sailors are able to set up and manage their voyages. However, how best to present information remains a crucial issue for better understanding spatial cognition and improving navigation without vision. In this study, we asked two participants to use SeaTouch on board and manage the ship's headings during navigation in order to follow a predefined itinerary. Two conditions were tested. In the first, blind sailors consulted the updated ship positions on a virtual map presented in an allocentric frame of reference (i.e., facing north). In the second, they used the force-feedback device in an egocentric frame of reference (i.e., facing the ship's heading). Spatial performance tended to show that the egocentric condition was better for controlling the course during displacement, whereas the allocentric condition was more efficient for building a mental representation and remembering it after the navigation task.
Nowadays, it is well known that blind people can take advantage of virtual navigation. Indeed, various experiments have shown interesting results in different types of environments. Jansson and Pedersen [1] studied a virtual map of the states of North America and showed that it was difficult to navigate with a haptic mouse (VTPlayer) which provided participants with only cutaneous feedback (two matrices of pins), in spite of the participants' motivation. Gutierrez [2] used a force-feedback device (GRAB Project) and assessed a geographical haptic representation of Madrid (Spain) by asking twelve blind people to create and describe a route between two points. Participants completed the tasks without major difficulty and stated that the application was attractive and easy to use because of the combination of force feedback and audio information. At the same time, Magnusson and Rassmus-Gröhn [3] investigated the transfer between an egocentric haptic virtual environment and a real street in a district of Lund (Sweden). Here, they asked participants to prepare and carry out an itinerary from a bus stop to a music hall. Results showed that blind people who were good at navigating with a cane were also good at exploring with the haptic device. Then, in the street, participants were able to complete the itinerary learnt in the virtual environment. Therefore, a transfer occurred between the virtual and real worlds. In this respect, Lahav and Mioduser [4] compared nonvisual spatial representations obtained after exploring real and virtual classrooms. "The results were clearly indicative of the contribution of learning with the virtual environment to the participants' anticipatory mapping of the target space and consequently to their successful performance in the real space" (p. 33). Thus, blind people seemed to learn the classroom layout equally well during real navigation and during virtual exploration.
Examining these studies, no one can deny that blind people benefit greatly from exploring virtual environments to learn about their surroundings. However, wide open spaces such as open land, mountains, or oceans have not yet been thoroughly studied. Thus, little is yet known about the benefits of virtual environments in helping blind people master the quite extreme navigation conditions that challenge spatial orientation in natural places, that is, when the choice of displacement is not constrained by any road. Focusing on blind sailors, we found that a maritime environment could be learned as precisely using a tactile map as using a virtual one [5]. Moreover, we recently found that blind sailors can take advantage of virtual navigation training sessions to locate themselves in real maritime space. Results revealed that a subject located himself more accurately in a real maritime situation after a navigation simulation in a virtual environment in an egocentric frame of reference (i.e., the map moves around the ship) than in an allocentric situation (i.e., the ship moves on the map) [6]. This seemed to happen because the participant got lost in the virtual egocentric environment and had to make a spatial reasoning effort to find his way back. This result raises the question of the coordination of spatial frames of reference between virtual and real navigation. In addition, recent and strong results show that sighted people using an egocentric map obtain better performance on route-following tasks, whereas they perform better on map reconstruction tasks when using an allocentric map [9].
Thus, the present investigation aims to test the influence of the spatial frame of reference of the display when a blind sailor is offered the opportunity to use a haptic and auditory navigation tool inside the ship during a real voyage. How can virtual and real information be combined? Is it more beneficial to display information in an egocentric frame of reference and use a Phantom force-feedback device as a virtual maritime cane? Or is it better to display the map in an allocentric frame of reference so as to refer to more stable seamarks?
In these case studies, the two participants were men aged 29 and 47. Participant 1 lost his vision at age 18 and participant 2 at age 23. Both have been sailing for many years and are able to use tactile paper maritime maps to set up and control their sailing trips.
SeaTouch software and hardware, aimed at providing for blind people’s cartographic needs, were used. Using haptic sensations, vocal announcements, and realistic sounds, SeaTouch allowed blind sailors to control their maritime itineraries.
Because recent S57 vector maritime maps are free in some countries and include many geographic objects, we developed the "Handinav" software, which transforms the S57 data into XML-structured files. A large number of objects can thus be selected for display or hidden: sea areas, coastlines, land areas, beacons, buoys, landmarks, and a great deal of other data are contained in these maritime XML maps. SeaTouch then builds Java3D maps from the XML data.
2.2.2. Force-Feedback Information
Using a Phantom Omni haptic force-feedback device, blind sailors explored a workspace about 40 centimetres wide, 30 centimetres high, and 12 centimetres deep with a haptic cursor. Thus, they touched the different objects of the maritime maps in a vertical plane, in the same way as sighted people view a computer screen. The haptic display was 2D-extruded; in other words, the relief of the land was drawn using only two flat surfaces two centimetres apart. Between the land and sea areas, the coastline formed a perpendicular wall (analogous to a cliff face) that users could follow with the Phantom. The coastline display used contact haptic force feedback. In contrast, for waypoints we applied a constraint haptic force feedback in the form of a spring one centimetre in diameter. This spring was an active force field that held the cursor inside the object with a 0.88-newton force; to exit the spring, users had to apply a stronger force. The display of the boat's position used the same spring, but users could navigate to it from anywhere in the workspace: they simply clicked the first button of the Phantom and the cursor then snapped to the position of the boat.
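The waypoint spring described above can be pictured as a saturating restoring force. The following Python sketch is purely illustrative (the stiffness value and all names are our assumptions, not the actual SeaTouch implementation): inside the 1 cm spring the force pulls the cursor back toward the waypoint centre and saturates at the 0.88 N exit threshold; outside, it is zero.

```python
import math

SPRING_RADIUS_M = 0.005   # 1 cm diameter -> 0.5 cm radius
MAX_FORCE_N = 0.88        # force the user must overcome to exit

def spring_force(cursor, center, stiffness=500.0):
    """Restoring force (N) pulling the haptic cursor toward the waypoint.

    The magnitude grows linearly with displacement and saturates at
    MAX_FORCE_N; outside the spring radius no force is applied.
    """
    dx = [c - p for p, c in zip(cursor, center)]  # vector cursor -> center
    dist = math.sqrt(sum(d * d for d in dx))
    if dist == 0.0 or dist > SPRING_RADIUS_M:
        return (0.0, 0.0, 0.0)
    magnitude = min(stiffness * dist, MAX_FORCE_N)
    return tuple(magnitude * d / dist for d in dx)
```

A cursor displaced 4 mm from the centre is already held at the full 0.88 N, which is what makes the waypoint feel like a small basin the user must deliberately push out of.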
2.2.3. Auditory Information
The sonification module used the force-feedback device: as soon as users touched a virtual geographic object with the haptic cursor, they heard a naturalistic recorded sound associated with that object. Thus, when they touched the sea they heard a water sound; when they touched and followed the coastline they heard seabirds crying out; and when they touched land areas, a sound of land birds was played. Moreover, when users broke the sea surface, they heard the sound a diver would make. Finally, using the "Acapela" vocal synthesis, when the Phantom cursor entered a waypoint, the corresponding name was spoken.
During the task, participants were asked to control the ship's position using the haptic and auditory navigation software, interfaced with the force-feedback device, inside the ship during real navigation. Here, we tested two conditions.
In the first condition, information was provided in an egocentric frame of reference; that is, the heading of the boat was oriented upward and the blind sailor could use the Phantom haptic interface like a long maritime cane. The ship did not move in the workspace; instead, the map shifted around the sailboat.
In the second condition, cartographic information was displayed in an allocentric frame of reference; that is, conventionally the north was up and the ship moved on a static virtual map during the real voyage.
The participants' role was entirely cartographic. They had to indicate to the crew the successive heading directions to follow, taking into account the wind direction, in order to reach seven consecutive waypoints numbered 1 to 7 (Figure 1). After the sailing session, back at the harbor, we asked the participants to draw the waypoint layout on a tactile paper sheet.
To avoid potential learning effects and biases due to different waypoint configurations, we used the same waypoint layout rotated by one hundred degrees from one condition to the other. Moreover, participant 1 performed the allocentric condition first and participant 2 the egocentric condition first. This counterbalanced order was aimed at avoiding a learning effect due to repetition of the method.
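Rotating the layout between conditions amounts to a standard 2D rotation of each waypoint about a common origin. A minimal Python sketch (names and the planar (x, y) convention are our own illustration, not the study's code):

```python
import math

def rotate_layout(waypoints, angle_deg, origin=(0.0, 0.0)):
    """Rotate a list of (x, y) waypoints by angle_deg around origin."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    ox, oy = origin
    return [(ox + (x - ox) * cos_a - (y - oy) * sin_a,
             oy + (x - ox) * sin_a + (y - oy) * cos_a)
            for x, y in waypoints]
```

Applying the same transform to every waypoint preserves all inter-waypoint distances and angles, so the two conditions remain geometrically equivalent while looking different to the participant.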
2.4. Data Collection
In this study, we assessed two types of data.
Firstly, we measured the seven Euclidean distances between the waypoints and the nearest positions of the ship during the itineraries. This provided us with an indicator of the distance accuracy of the navigation control.
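This nearest-approach measure can be sketched in a few lines of Python, assuming the ship track is logged as a list of planar (x, y) fixes in metres (the names are illustrative, not the study's actual code):

```python
import math

def nearest_approach(track, waypoint):
    """Smallest Euclidean distance between a recorded ship track
    (list of (x, y) fixes) and a waypoint."""
    wx, wy = waypoint
    return min(math.hypot(x - wx, y - wy) for x, y in track)
```

Computing this for each of the seven waypoints yields the per-waypoint accuracy values that the averages in Section 3.1 summarize.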
Secondly, we compared the layouts of the waypoints drawn on tactile paper with their real positions. To compare these two configurations, we assumed that the first waypoint was drawn in its correct position. Distances were then converted by applying the scale used in the Phantom workspace during navigation (1 : 1000).
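This anchoring-and-scaling step can be sketched as follows, assuming planar coordinates in metres (function and parameter names are ours, not the study's code): the drawn layout is translated so its first waypoint coincides with the real first waypoint, and the 1 : 1000 workspace scale is applied to the offsets.

```python
def to_real_coords(drawn, real_first, scale=1000.0):
    """Map a drawn layout (workspace metres) into real-world metres by
    anchoring the first drawn waypoint at the real first waypoint and
    applying the 1:1000 workspace scale to all offsets."""
    dx0, dy0 = drawn[0]
    rx0, ry0 = real_first
    return [(rx0 + (x - dx0) * scale, ry0 + (y - dy0) * scale)
            for x, y in drawn]
```

After this mapping, drawn and real waypoints live in the same coordinate system, so per-waypoint distance errors can be computed directly.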
3.1. The Itinerary Control
The main result showed that participants passed an average of 20 (±19) meters from the waypoints in the egocentric condition versus 45 (±32) meters in the allocentric condition. Statistical analysis revealed that the samples did not follow a normal distribution (Lilliefors test) but that a significant difference exists according to the nonparametric Wilcoxon paired test. In other words, they controlled the course in the first condition about twice as accurately as in the second, a significant difference (e.g., Figures 2(a) and 2(b)).
3.2. Itinerary Representation
3.2.1. Distance Accuracy
Another interesting result emerged when participants were asked to draw the waypoint layout. Indeed, after navigating with the egocentric display, participant 1 was unable to draw the waypoint layout at all. He could only say that the fourth waypoint was to the west of the fifth, when it was actually to the southwest. Participant 2 could draw the itinerary layout after navigating in the egocentric condition, but with a distance precision of only about 199 (±195) meters.
Conversely, after navigating with the allocentric display, both participants were able to draw configurations on the tactile paper sheets. Participant 1 drew waypoints with an average precision of about 127 (±67) meters from the actual waypoints and participant 2 illustrated the configuration with an average accuracy of 98 (±61) meters.
Obviously, no statistical test could be performed on the distance accuracy for participant 1 because of the lack of a representation in the egocentric condition. Focusing on participant 2, we did not find any significant difference (Wilcoxon test) between the distance precision of the representations drawn after navigating in the egocentric and allocentric conditions.
3.2.2. Layout Similarity
Applying bidimensional regression techniques [10], we found correlation coefficients of 0.86 for participant 1 and 0.92 for participant 2 in the allocentric condition. These results indicate that the mental and actual representations were quite similar, unlike in the egocentric condition, where participant 2 drew a layout whose correlation coefficient was only 0.16. In other words, when participant 2 drew the waypoint layout after navigating in the egocentric condition, there was little similarity between his representation and the actual layout. In brief, the representations built in the allocentric condition appeared much more accurate than those built in the egocentric condition (e.g., Figures 3(a) and 3(b)).
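Bidimensional regression (Friedman and Kohler [10]) fits a similarity transform (translation, rotation, scaling) between the actual and drawn layouts and reports how much of the drawn layout's variance that transform explains. The Euclidean version can be sketched compactly with complex arithmetic; this is our own illustrative implementation, not the authors' analysis code:

```python
def bidimensional_r(actual, drawn):
    """Euclidean bidimensional regression correlation r.

    Points are encoded as complex numbers; a least-squares fit of
    drawn ~ a + b * actual (complex b encodes rotation + scaling)
    gives r = sqrt(1 - SSE/SST), where SST is the total variance
    of the drawn layout about its centroid.
    """
    A = [complex(x, y) for x, y in actual]
    B = [complex(x, y) for x, y in drawn]
    n = len(A)
    mA, mB = sum(A) / n, sum(B) / n
    # Complex OLS slope: b = sum(conj(A-mA)*(B-mB)) / sum(|A-mA|^2)
    num = sum((a - mA).conjugate() * (b - mB) for a, b in zip(A, B))
    den = sum(abs(a - mA) ** 2 for a in A)
    b = num / den
    a0 = mB - b * mA
    sse = sum(abs(bi - (a0 + b * ai)) ** 2 for ai, bi in zip(A, B))
    sst = sum(abs(bi - mB) ** 2 for bi in B)
    return max(0.0, 1.0 - sse / sst) ** 0.5
```

A drawn layout that is an exact rotated, scaled, translated copy of the actual one yields r = 1; a layout bearing no similarity beyond its centroid yields r near 0.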
This study aimed to investigate whether it was more beneficial for blind sailors to display information in an egocentric frame of reference and use the haptic force-feedback device as a virtual maritime cane, or to display the map in an allocentric frame of reference in order to refer to more stable seamarks. Results suggest that the haptic egocentric display is more efficient for controlling a maritime itinerary, whereas allocentric navigation seems better suited to constructing a mental map of a maritime environment without vision.
4.1. Egocentric Condition
In the egocentric condition, we saw that using the haptic force-feedback device inside the ship during the voyage made haptically perceptible what could not otherwise be perceived without vision. Blind sailors employed the force-feedback device as a long maritime cane, not to avoid obstacles but rather to find waypoints. We noticed that this strategy was also found in the Magnusson and Rassmus-Gröhn experiment [3] within the audio traffic environment. However, in the present study, when map and environment were aligned, participants followed a better itinerary. Even if the global pictures of the ship trajectories and the haptic exploration patterns did not look clearly different (Figures 2(a) and 2(b)), the distances at each waypoint (i.e., 20 m in the egocentric condition versus 45 m in the allocentric one) showed better results in the egocentric condition, and the verbal reports (Table 1) provided by the participants revealed that, in the egocentric condition, it was easy to feel the difference between the ship's heading (i.e., the bottom-up axis of the Phantom workspace) and the direction of the current waypoint. Conversely, in the allocentric condition, participants had to follow the ship for a long time to perceive a rough heading direction. They then had to perform a mental rotation to deduce the adjustments necessary to reach the next waypoint. Thus, for following an itinerary during navigation, the allocentric condition appeared less intuitive to the participants than the egocentric option and led to less precise navigation.
4.2. Allocentric Condition
In the allocentric condition, participants had to perform mental rotations to adjust successive ship headings. However, although the orientation of the boat was less intuitive, the positions of the waypoints were stable and could be used as real seamarks to perceive the position of the boat relative to the whole itinerary layout. In this case, as can be seen in Figures 3(a) and 3(b), blind sailors were better at drawing this global layout. Even though this study involved only two participants, this result appears robust, since in the allocentric condition the scores are close to 1 (i.e., total similarity) (e.g., 0.86 and 0.92), whereas in the egocentric condition the scores are close to 0 (i.e., no similarity) (e.g., 0 and 0.16). This result supports the idea that stable seamarks allowed participants to build true mental invariants, useful for encoding a consistent spatial representation in long-term memory, as suggested by Thinus-Blanc and Gaunet [11]. We note that this spatial cognition process appears to be the same as in sighted people. The similarities between representations and itineraries in the allocentric condition for both participants suggest that they encoded a global geometric shape from a "north-up point of haptic view," whereas they did not construct any consistent form in the egocentric condition. In line with the results on the dependence of the point of view and the intrinsic axes of a configuration [12, 13], we suggest that identifying salient axes within the layout (e.g., three waypoints in line) along different stable orientations (e.g., north-south-east-west) could provide blind sailors with multiple enduring representations. A crucial issue remains regarding the usability of these multiple representations during navigation.
5. Conclusion and Perspectives
To conclude, these results show that egocentric and allocentric information presentations do not provide blind sailors with the same advantages. The alignment of the egocentric condition helps to extend direct perceptions and process them in working memory, while the allocentric presentation leads to encoding spatial invariants in long-term memory. These results are congruent with the previous literature on this topic concerning sighted people. Indeed, Wickens [14] studied the influence of egocentric and allocentric displays and found that there is no single best type of map, because effectiveness depends on the task. For example, he showed that an allocentric view led to better results in a strategic task, while an egocentric view offered better results in a wayfinding task. More recently, Porathe [15] showed that the egocentric view was the most effective information presentation in a wayfinding task. These common outcomes reinforce the idea that blind and sighted people reason similarly about space and also suggest that blind people's difficulties lie more in accessing information without vision [16].
However, as only two participants took part in this experiment, the results should be taken with caution. In other words, we cannot statistically affirm that the egocentric condition allows more precise navigation or that the allocentric condition is more efficient for building a mental spatial representation. Nevertheless, the results allow us to formulate a strong hypothesis and a more detailed experimental protocol in order to explain precisely the cognitive processes involved in such a situation.
As a perspective, it would be necessary to set up a new experiment with more participants, and even with sighted people, to see whether there is any difference between egocentric and allocentric spatial cognition with and without sight. It would also be interesting to let participants switch between the egocentric and allocentric displays according to the task. We hypothesize that a way to improve both wayfinding and representation tasks would be to switch between the two displays as often as the situation requires. Moreover, potential correlations between exploration movement patterns (Figure 4) and spatial efficiency could lead to a better understanding of nonvisual spatial cognition and provide blind sailors with a pedagogic method for using such a tool.
Conflict of Interests
The authors report no conflict of interests. The authors alone are responsible for the content and writing of the paper.
The authors would especially like to thank the four blind sailors who agreed to take part in the experiments and the Orion association for the use of their ship.
- G. Jansson and P. Pedersen, “Obtaining geographical information from a virtual map with a haptic mouse,” in Proceedings of the 22nd International Cartographic Conference (ICC '05), A Coruna, Spain, July 2005.
- T. Gutierrez, Enhanced Network Accessibility for the Blind and Visually Impaired, Labein, Madrid, Spain, 2004.
- C. Magnusson and K. Rassmus-Gröhn, “A dynamic haptic-audio traffic environment,” in Proceedings of the 2004 EuroHaptics, Munich, Germany, June 2004.
- O. Lahav and D. Mioduser, “Haptic-feedback support for cognitive mapping of unknown spaces by people who are blind,” International Journal of Human Computer Studies, vol. 66, no. 1, pp. 23–35, 2008.
- M. Simonnet, S. Vieilledent, J. Guinard, and J. Tisseau, “Can haptic maps contribute to spatial knowledge of blind sailors?” in Proceedings of the International Conference on Enactive Interfaces (ENACTIVE '07), pp. 259–262, Grenoble, France, 2007.
- M. Simonnet, S. Vieilledent, D. R. Jacobson, and J. Tisseau, “The assessment of non visual maritime cognitive maps of a blind sailor: a case study,” Journal of Maps, vol. 2010, pp. 289–301, 2010.
- R. F. Wang and E. S. Spelke, “Human spatial representation: insights from animals,” Trends in Cognitive Sciences, vol. 6, no. 9, pp. 376–382, 2002.
- B. G. Witmer, J. H. Bailey, B. W. Knerr, and K. C. Parsons, “Virtual spaces and real world places: transfer of route knowledge,” International Journal of Human Computer Studies, vol. 45, no. 4, pp. 413–428, 1996.
- W. Rodes and L. Gugerty, “Effects of electronic map displays and individual differences in ability on navigation performance,” Human Factors, vol. 54, no. 4, pp. 589–599, 2012.
- A. Friedman and B. Kohler, “Bidimensional regression: assessing the configural similarity and accuracy of cognitive maps and other two-dimensional data sets,” Psychological Methods, vol. 8, no. 4, pp. 468–491, 2003.
- C. Thinus-Blanc and F. Gaunet, “Representation of space in blind persons: vision as a spatial sense?” Psychological Bulletin, vol. 121, no. 1, pp. 20–42, 1997.
- W. Mou, H. Zhang, and T. P. McNamara, “Novel-view scene recognition relies on identifying spatial reference directions,” Cognition, vol. 111, no. 2, pp. 175–186, 2009.
- W. Mou, Y. Fan, T. P. McNamara, and C. B. Owen, “Intrinsic frames of reference and egocentric viewpoints in scene recognition,” Cognition, vol. 106, no. 2, pp. 750–769, 2008.
- C. D. Wickens, “Human factors in vector map design: the importance of task-display dependence,” Journal of Navigation, vol. 53, no. 1, pp. 54–67, 2000.
- T. Porathe, “Measuring effective map design for route guidance: an experiment comparing electronic map display principles,” Information Design Journal, vol. 16, no. 3, pp. 190–201, 2008.
- J. F. Fletcher, “Spatial representation in blind children. 1: development compared to sighted children,” Journal of Visual Impairment and Blindness, vol. 74, no. 10, pp. 381–385, 1980.