Abstract

Although their spatial reasoning capabilities remain quite similar to those of sighted people, blind people encounter difficulties in obtaining distal information from their surroundings. Thus, whole body displacements, tactile map consultations, or auditory solutions are needed to establish physical contact with their environment. Therefore, the accuracy of nonvisual spatial representations heavily relies upon the efficiency of exploration strategies and the ability to coordinate egocentric and allocentric spatial frames of reference. This study aims to better understand the mechanisms of this coordination without vision by analyzing cartographic exploration strategies and assessing their influence on mental spatial representations. Six blind sailors were immersed within a virtual haptic and auditory maritime environment and were required to learn the layout of the map. Their movements were recorded, and several exploration strategies were identified. They then had to estimate the directions of six particular seamarks in aligned and misaligned situations. Better accuracy and coordination were obtained when participants used the “central point of reference” strategy. Our discussion of the articulation between enduring geometric representations and salient transient perceptions has implications for map reading techniques and for orientation and mobility programs for blind people.

1. Introduction

Movement plays a major role in the acquisition of environmental knowledge since it is the only way we have of interacting with the world [1]. Therefore, the movements performed when people explore a novel environment may influence their performance in spatial tasks. As a consequence, understanding the relationships between exploratory movement patterns and environmental knowledge remains crucial, particularly for blind people, who can never obtain direct visual information. Thus, the main goal of this study is to identify efficient cartographic exploration strategies in order to propose their inclusion in teaching programs devoted to the mobility and orientation of blind people facing navigation tasks.

Navigation in the physical environment consists of whole body displacements toward a spatial goal that can be directly perceived or that lies beyond the immediate perceptual field. Even if certain objects can be considered as attractors or repellers that trigger guidance mechanisms of the participant through the environment [2], navigation remains possible when these particular landmarks are unavailable to the participant. In this latter case, other much more complex mechanisms, involving for example geometrical features [3] or salient axes within the environment [4], are involved in orientation [5]. They play a major role in the choice of spatial information to be used and affect the amount of memory resources devoted to the task. Although the notion of a spatial cognitive map, considered as a form of cartographic mental field [6], is still widely debated, it is used as a concept providing a framework to better understand mental spatial processes [7].

For efficient navigation, the cognitive coordination between an egocentric system, within which self-to-object spatial relations are systematically updated as soon as the participant moves, and an allocentric one, within which the participant builds a representation based on object-to-object relations, remains crucial [8]. This coordination depends both on the geometry of the environmental configuration and on the activity of the participant. Shelton and McNamara [9] and Mou et al. [5] showed that encoding an intrinsic reference, that is, a salient axis, within the object configuration favors the coordination and integration of the egocentric and allocentric systems. Wang and Spelke [3] and Waller and Hodgson [10] pointed out that a disorientation episode, triggering the feeling of being lost, occurs when none of the elements of the transient egocentric point of view belongs to a more enduring allocentric representation of the environment. As proposed by Thinus-Blanc and Gaunet [11], in order to get reoriented, one has to extract spatial invariants, defined as the properties of the surrounding world that remain perceptually and mentally salient in both the egocentric and allocentric systems despite the tremendous variability of the sensory inputs during the displacement of the participant. Thus, becoming lost and subsequently actively working to get reoriented may constitute a valuable opportunity to improve overall spatial knowledge. Solving this problem, especially without vision, requires the participant to find a way to explore the environment with appropriate strategies.

Although the spatial reasoning capabilities of blind individuals remain similar to those of sighted people [12], blind people encounter difficulties in obtaining distal information from their surroundings. Accurately localizing objects solely with audition remains difficult [13]. The mechanism for resolving this problem is more complex for individuals without vision because of their inability to perceive distal information directly. Even if it has been shown that nonvisual spatial representations can be built from auditory cues [14], blind people often need whole body displacements (e.g., walking) or consultations of tactile maps to establish physical contact with the surrounding objects [15]. Therefore, the accuracy of nonvisual spatial representations heavily relies upon the efficiency of exploration strategies. Hill and Ponder [16] and Hill and Rieser [17] have documented two types of processes used to explore novel spaces without vision. The investigation phase consists of looking for the salient features of the environment, using predominantly two exploration strategies to facilitate the location of objects. Firstly, the “perimeter” pattern is implemented by moving around the perimeter of the area in a constant direction until returning to the starting point. Secondly, the “grid” pattern is a series of straight-line movements systematically crossing the area. The memorization phase then aims at encoding the relationships between the important objects in the layout. Encoding strategies have been studied in locomotor [18] and manipulatory [19] tasks, leading to the identification of two main patterns of movement observed in both kinds of tasks. The “cyclic” strategy consists of successive visits to all the different objects, the same object being visited at the beginning and at the end of the sequence; this strategy involves the egocentric system. The “back-and-forth” strategy consists of repeated movements between the same two objects; it involves the allocentric system and leads to better learning of the object layout. The congruence of the behaviors observed in small manipulatory and large locomotor spaces leads us to assume that extracting spatial invariants may be grounded in high-level mental processes consisting of using numerous route-like representations in order to identify their shared landmarks, connect them, and construct map-like knowledge. This view is consistent with the nature of the “reference point” strategy identified by Tellevik [20] as a pattern of movements in which subjects relate their exploration to a salient landmark. The author showed that this strategy helped participants obtain directional and angular information to locate objects and places.

Virtual reality has emerged as a powerful and flexible tool for simulating real environments with both ecological validity and experimental control [21]. Researchers have created specific multimodal environments and assessed their efficiency in providing more responsive and salient spatial information for blind individuals [22]. For example, Lahav and Mioduser [23] studied nonvisual exploration strategies in a virtual environment (VE) by recording movement patterns produced by blind people via a haptic interface in a virtual classroom. In line with previous results obtained in real environments [19], they confirmed that VE provided blind people with reliable access to spatial knowledge. The potential gains offered by nonvisual VE were also revealed in spatialized auditory environments [24], and spatial performances obtained in VE were more accurate when participants used allocentric strategies than egocentric ones. Furthermore, Delogu et al. [25] showed that the combination of haptics and sonification allows blind people to explore a virtual map and recognize it among different tactile maps, and this approach seems particularly interesting for understanding the cognitive mechanisms allowing blind people to acquire spatial knowledge [24]. In this respect, works such as those of Brock et al. [26], investigating tactile map exploration in a mixed real and virtual environment using a Kinect device, open new opportunities to systematically analyze the efficiency of different exploration strategies.

If spatial efficiency is considered as the capability to coordinate egocentric and allocentric spatial frames of reference, the aim of the present study is to assess the accuracy of the mental representations of blind sailors learning the layout of a large geographical maritime space via a haptic and auditory VE. Blind sailors are expert users of tactile maps because they use them much more often than sighted people do. We expect to identify some efficient exploration strategies that could be transferred to less spatially skilled people. Thus, haptic exploration strategies are analyzed to assess their influence on spatial performances. The main goal of this research is to identify and correlate haptic strategies and spatial performances in order to detect efficient patterns of exploration, which could lead to new geographic learning methods in the future.

For the purpose of the present study, blind sailors constitute an interesting population because they are accustomed to manipulating tactile maps. Indeed, contrary to urban environments, maritime environments contain very few predefined itineraries and do not allow blind sailors to rely on street directions to update their positions. Consequently, they need to explore maps efficiently and build their routes by themselves.

In this respect, we attempt to answer the three following questions.
(i) Can blind sailors be accurate when learning a haptic and auditory maritime VE?
(ii) Can particular kinematic features and specific strategies be identified within the exploration patterns?
(iii) What are the relationships between spatial performances and haptic exploration patterns?

2. Method

2.1. Participants

Six blind adults (38 ± 9 years), one woman and five men, volunteered for the experiment. The experiment was approved by the local ethics committee and complied with the Declaration of Helsinki. Participants gave their informed consent prior to their inclusion in the study. Although blindness varies widely, ranging from visual impairment with a high degree of light perception to complete blindness with no light perception, all six participants met the physiological definition of blindness [27, 28]. Two participants were congenitally blind and the four others lost vision later in life (Table 1). None of the participants had any residual visual perception.

The participants were recruited from a blind sailing association in Brest (France). All participants were familiar with maritime maps and used their own personal computers with text-to-speech software on a daily basis.

2.2. Apparatus

In this study, we used a 40 cm wide and 30 cm high virtual map (Figure 1) comprising a homogeneous land mass (25% of the map area), the sea, and six salient landmark objects within the sea. In maritime terms, these landmarks are referred to as beacons. The map was generated by SeaTouch, a haptic and auditory Java application developed in the European Centre for Virtual Reality for the navigational training of blind sailors. The haptic interaction between the participant and the virtual map was provided by a Phantom Omni device (Sensable Technologies). We chose this device for its force feedback and its wide three-dimensional workspace (30 × 40 cm). Indeed, force feedback allows participants to clearly feel the tactile-kinesthetic information rendered when a beacon is touched. Finally, the three-dimensional workspace allows users to explore the sea area and jump over coastlines and land areas with the Phantom cursor. Within the VE, the rendering of the sea was soft and sounds of waves were played when the participants touched it. The rendering of the land was rough and extruded by one centimeter above the surface of the sea; when the haptic cursor came into contact with the land, the song of inland birds was played. Between the land and the sea, the coastline was rendered as a vertical cliff that could be touched and followed; in this case, the sounds of sea birds were played. The salient objects, the six beacons, were generated by a spring effect, an attractor field analogous to a small magnet of 1 centimeter in diameter. When the haptic cursor came into contact with one of them, a synthetic voice announced its name (Boat, Gull, Float, Penguin, Guillemot, and Egret).
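To make this kind of spring-effect rendering concrete, the following minimal sketch computes a simple Hooke-like pull toward a beacon center whenever the cursor enters its 1 cm capture zone. The variable names, the stiffness value, and the unit conventions are our assumptions for illustration only; they do not describe SeaTouch's actual implementation.

import numpy as np

def beacon_attraction_force(cursor_pos, beacon_pos, radius=0.005, stiffness=200.0):
    """Return a spring-like force pulling the haptic cursor toward a beacon.

    cursor_pos, beacon_pos: 3D positions in meters.
    radius: capture radius of the attractor (0.005 m, half of a 1 cm diameter).
    stiffness: spring constant in N/m (illustrative value only).
    """
    offset = np.asarray(beacon_pos, dtype=float) - np.asarray(cursor_pos, dtype=float)
    distance = np.linalg.norm(offset)
    if distance == 0.0 or distance > radius:
        return np.zeros(3)           # outside the attractor field: no force
    return stiffness * offset        # Hooke's law: pull toward the beacon center

# Example: cursor 3 mm away from a beacon along the x axis.
force = beacon_attraction_force([0.003, 0.0, 0.0], [0.0, 0.0, 0.0])
print(force)    # [-0.6  0.  0.], a gentle pull back toward the beacon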

2.2.1. Procedure

The experimental protocol was conducted sequentially in three phases: training, exploration, and evaluation.

2.2.2. Training

To ensure that the participants mastered the haptic and auditory interactions with the virtual map, they trained until they were able to easily follow the coastline, move over the surface of the sea, and locate beacons with the stylus of the haptic device. The training phase ended when the participants verbally reported that they had mastered these abilities.

2.2.3. Exploration

Before beginning any movement, the blind participants were informed that the ultimate purpose of the exploration was to acquire enough spatial knowledge to prepare for the question phase, during which the relative directions between different beacons would have to be estimated without any tangible or virtual map. Exploring the virtual map consisted of displacing the stylus of the haptic device within the haptic, vocal, and auditory environment, and the exploration stopped when the participant could remember the names of the six beacons and localize them on the map without confusion.

2.2.4. Questions and Data Collection

After the exploration phase, the participants pointed from each beacon's location to three others in each of the two proposed alignment situations. They thus answered eighteen questions in the so-called aligned situation and eighteen new questions in the so-called misaligned situation.

In the aligned situation, we posed the following kind of question: “You are at the Gull and you are facing the north, what is the direction of the Egret?” Here, the axes of the participant and the north were aligned. To estimate this direction, the participant presumably evaluated it primarily from an egocentric frame of reference.

In the misaligned situation, the following kind of question was posed: “You are at the Gull and facing the Penguin, what is the direction of the Egret?” Here, the axes of the participant and the north were different. The participant had to perform a mental rotation to combine these two axes and estimate the required direction. In other words, participants were forced to coordinate egocentric and allocentric directions and frames of reference by themselves.
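As a worked illustration of the geometry behind these questions, the sketch below computes both answers; the beacon coordinates are invented and only the aligned/misaligned logic follows the task description. The misaligned answer is obtained by subtracting the current heading from the allocentric bearing of the target, which is exactly the mental rotation the participants had to perform.

import math

def bearing_deg(origin, target):
    """Compass bearing (deg, clockwise from north) from origin to target.
    Positions are (east, north) map coordinates."""
    de, dn = target[0] - origin[0], target[1] - origin[1]
    return math.degrees(math.atan2(de, dn)) % 360.0

# Hypothetical beacon positions (km east, km north); not the experimental layout.
gull, penguin, egret = (0.0, 0.0), (4.0, 3.0), (5.0, -2.0)

# Aligned question: facing north at the Gull, the answer is the allocentric bearing.
aligned_answer = bearing_deg(gull, egret)

# Misaligned question: facing the Penguin, the answer is the bearing of the Egret
# minus the current heading, i.e., a mental rotation of the memorized map.
heading = bearing_deg(gull, penguin)
misaligned_answer = (bearing_deg(gull, egret) - heading) % 360.0

print(round(aligned_answer, 1), round(misaligned_answer, 1))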

The questions were answered solely by means of the tangible pointer of a real protractor fixed to the table in front of the participants. This protractor allowed participants to point naturally toward a particular direction. They did not feel any graduations with this tool and could indicate angles as they would have done with their fingers. We then read the angle values and recorded the results.

Angular data was collected to the nearest degree (Figure 2).

Figure 2: Small circles represent beacons; arrows represent the axes of the participant, the north, and the pointer of the protractor. The direction estimates were read on the protractor.

2.3. Data Analysis

To assess the spatial knowledge of the participants, the estimated directions between beacons were used, and the kinematics of the participants' movements were analyzed to characterize the spatial activity involved in spatial encoding processes.

Firstly, the angular response to each directional question was used to compute the unsigned angular error (AE), that is, the difference, expressed in degrees, between the estimated and the correct directions of the beacon. The 6 (subjects) × 6 (beacons) × 3 (questions) × 2 (alignments) design led to the collection of 216 analyzed AE values.
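A minimal sketch of this measure, assuming the usual circular folding of differences into [0, 180] deg (a detail the paper does not spell out):

def unsigned_angular_error(estimated_deg, correct_deg):
    """Unsigned angular error in degrees, folded into [0, 180]."""
    diff = abs(estimated_deg - correct_deg) % 360.0
    return min(diff, 360.0 - diff)

print(unsigned_angular_error(350.0, 10.0))   # 20.0, not 340.0
print(unsigned_angular_error(103.2, 95.0))   # about 8.2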

Secondly, the exploration patterns, that is, the spatial trajectories of the stylus of the haptic device, were analyzed as follows. The trajectories were considered as a whole, from the beginning to the very end of the movement. Within these entire sequences, we measured the elapsed duration (ED), the travelled distance (TD), and the parallelism index (PI). PI indicates how parallel a given direction of a single cursor movement is to the direction of the previous one [29]. Remaining totally independent of TD, PI was calculated as the average cosine between the current and previous directions of the movement. Thus, PI ranges between −1 and 1, from back-and-forth movements to strictly straight movements performed in the same direction, with intermediate values obtained for more or less pronounced zigzag movements. Following the proposal of Hill and Ponder [16] and Hill and Rieser [17], two phases were identified within the exploratory time: investigation and memorization. For each of these phases, we computed the same variables (ED, TD, and PI). The investigation phase corresponded to the participant discovering the environment and consequently ended when the haptic cursor had contacted each beacon at least once. Then, during the memorization phase, the participant displaced the haptic cursor between the different beacons in order to encode their positions in memory. As mentioned above, the memorization phase ended when the participant said that he or she could localize the six beacons without confusion.
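A minimal sketch of the PI computation, assuming the trajectory is available as a list of sampled cursor positions (the sampling details are not specified in the paper):

import numpy as np

def parallelism_index(positions):
    """Average cosine between successive movement directions of the cursor.

    positions: (n, 2) or (n, 3) array of sampled cursor positions.
    Returns a value in [-1, 1]: 1 for straight movement in a constant direction,
    -1 for pure back-and-forth reversals.
    """
    steps = np.diff(np.asarray(positions, dtype=float), axis=0)
    norms = np.linalg.norm(steps, axis=1)
    steps = steps[norms > 0] / norms[norms > 0, None]   # drop zero-length samples
    cosines = np.sum(steps[1:] * steps[:-1], axis=1)    # dot products of unit steps
    return float(np.mean(cosines))

# A zigzag trajectory gives an intermediate PI value (about 0.6 here).
print(parallelism_index([(0, 0), (1, 0.5), (2, 0), (3, 0.5), (4, 0)]))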

In addition, we also attempted to characterize the exploration strategies, that is, the spatiotemporal order in which blind sailors explored and touched the six different beacons in the maritime VE. We wanted to determine whether some of these patterns of movement were more effective than others for gaining efficient spatial knowledge, and whether they were shared by all participants or specific to each individual.

3. Results

AE, ED, and TD did not follow a normal distribution (Lilliefors test, P < .05). Thus, statistical paired comparisons between the two alignment situations (aligned and misaligned) were performed by means of the nonparametric Wilcoxon test. Conversely, since PI did not deviate from a normal distribution (Lilliefors test, P > .05), we used an unpaired Student's t-test.
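For readers wishing to reproduce this test-selection logic, the sketch below follows the choices described above (paired Wilcoxon when normality is rejected, unpaired Student's t-test otherwise). It relies on SciPy and statsmodels; the significance threshold and the way the two samples are combined for the normality check are our assumptions.

import numpy as np
from scipy.stats import wilcoxon, ttest_ind
from statsmodels.stats.diagnostic import lilliefors

def choose_test(sample_a, sample_b, alpha=0.05):
    """Compare two samples, picking the test according to a Lilliefors normality check."""
    sample_a, sample_b = np.asarray(sample_a), np.asarray(sample_b)
    _, p_norm = lilliefors(np.concatenate([sample_a, sample_b]), dist="norm")
    if p_norm < alpha:                        # normality rejected: paired Wilcoxon test
        stat, p = wilcoxon(sample_a, sample_b)
        return "wilcoxon", stat, p
    stat, p = ttest_ind(sample_a, sample_b)   # normality not rejected: unpaired t-test
    return "t-test", stat, p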

3.1. Responses to the Questions

For the entire set of responses, the mean AE was 21.6 deg (SD 21.3), with values ranging from 0.2 deg (best responses) to 89.2 deg (worst responses). The distribution was characterized by an equal number of responses on either side of a threshold value of 15.2 deg. All the participants self-reported that they encountered more difficulties in answering the questions in the misaligned than in the aligned situation, and commented that these additional problems stemmed from the necessity to mentally rotate the memorized map in order to update their orientation before pointing with the protractor.

Evidence of these difficulties is quantitatively clear. Mean AE was 14.7 deg (SD 14.0), with values spread from 0.2 deg to 87.5 deg, in the aligned situation, whereas mean AE was 28.4 deg (SD 24.9), with values spread from 0.4 deg to 89.2 deg, in the misaligned situation. These values not only indicate that the mean value almost doubled when the axes of the participant and the north differed (from 14.7 deg to 28.4 deg), but we also noticed specific data distributions in each situation (Figure 3). Half of the data remained below 12.0 deg in the aligned situation, and 95% of the direction estimates did not exceed 35 deg. In the misaligned situation, half of the responses remained below 21.7 deg and 95% of the responses were below 80 deg. In summary, the distribution of the responses was more homogeneous across the different clusters in this latter case, with a better balance between accurate and inaccurate responses, but overall this result reinforces the idea that the accuracy of the participants' responses tended to decrease in the misaligned situation compared with the aligned one. The observed AE differences between the two situations were significant (Wilcoxon test, z = 4.95, P < .0001) and confirmed the existence of an alignment effect [30]. The author of that study asked participants to learn a building configuration from maps and demonstrated that performances were better when the orientation of the map matched the participant's orientation to the view of the building.

This effect could potentially originate from specific biases when answering questions concerning the different beacons to be pointed at, or from individual answering strategies. Indeed, even if we did not notice any AE difference between the beacons either in the aligned (Wilcoxon test, z ranging from 0.21 to 1.7, P > .05) or in the misaligned situation (Wilcoxon test, z ranging from 0.10 to 0.87, P > .05), with mean values spread from 12.2 deg (SD 18.8) to 18.1 deg (SD 19.1) and from 21.8 deg (SD 23.3) to 35.5 deg (SD 23.3), respectively, we observed that AE increased in the misaligned situation for four of the six beacons (Wilcoxon test, z ranging from 2.39 to 2.89, P < .05).

At the individual level, participants performed differently from each other according to the alignment situation (Table 2). When confronted with aligned situations, three subgroups of participants emerged from the responses to the questions. Participants 1, 2, and 3, whose results were similar (Wilcoxon test, z ranging from 0.12 to 0.54, P > .05), made up the first subgroup, and their results were significantly better (Wilcoxon test, z ranging from 2.11 to 3.29, P < .05) than those obtained by participants 4 and 5 (Wilcoxon test, z = 0.35, P > .05), who made up the second subgroup. Participant 6 remained alone in the third subgroup, with intermediate responses that were not significantly different from those of the first two subgroups. Although the pattern of individual responses appeared to be more complex when the participants were confronted with misaligned situations, two main subgroups could be distinguished, with participants 2 and 3 obtaining similar (Wilcoxon test, z = 0.37, P > .05) and better (Wilcoxon test, z ranging from 2.85 to 3.64, P < .05) responses than those, also similar (Wilcoxon test, z = 0.61, P > .05), obtained by participants 4 and 5.

In addition, the observed differences between the two alignment situations were not identical for each participant (details are shown in Table 2), since the comparison of AE obtained in the aligned and misaligned situations did not reach significance for three participants (P2, P3, and P6), whereas significantly better performances occurred in the aligned situation for the three others (P1, P4, and P5).

3.2. Movements of the Haptic Cursor during Exploration

Qualitatively, the movements of the haptic cursor had different shapes depending on the participant but also on the experimental phase (Tables 3 and 4).

A large elliptical shape (the extent of the physical workspace of the haptic device) was not systematically followed by all the participants (e.g., P4), even if some of them displaced the cursor along at least one part of this border, either during the investigation phase (e.g., P3 and P5) or, more rarely, also during the memorization phase (e.g., P6).

In addition to the physical limits of the workspace, the participants had to determine the position of the coastline in order to clearly identify the functional area within which the six beacons were located. The coastline was carefully followed by participants 1, 3, and 5 during the investigation phase and, overall, very few movements went over the land portion of the virtual map. These movements, performed along the maritime borders of the virtual space or along the virtual coastline, allowed the participants to calibrate the amplitude of their arm movements in the actual space and to match them with the displacements of the virtual cursor.

Since the participants also had to discover and memorize the positions of the six beacons, the cursor movements did not only consist of following the maritime or terrestrial edges of the virtual space. Covering the central part of the virtual sea, these trajectories had three main characteristics. Firstly, for each participant, the movements performed during the memorization phase could not be considered as a reproduction of those performed during the investigation phase, indicating that participants estimated by themselves that touching the entire set of beacons only once during the investigation phase was not enough and that they needed additional sensorimotor interactions with the environment to improve their spatial knowledge. Secondly, as expected, the spatial density of the trajectories was not identical between participants. Whereas some of them (P1 and P2) briefly swept the virtual sea, leaving large portions unexplored either during the investigation phase (P1) or during the memorization phase (P2), others preferred to systematically displace their cursor until almost all portions of the space had been explored (P3 and P6 during investigation, P4 and P6 during memorization). Thirdly, differences in the way participants reached the beacons also appeared between the two exploration phases. Although each beacon was touched at least once during the investigation phase, it remains difficult to identify specific searching patterns during this phase since the cursor often stayed far away from the beacons (P1, P2, P3, and P6). Reaching patterns remained difficult to identify for some participants during the memorization phase (P1 and P6), but they seemed to be very well organized for others, who established systematic links between stabilized series of beacons (P2, P3, P4, and P5).

Quantitatively, at the global level, that is, considering the entire exploration phase, which lasted 573.7 sec (SD 281.6), TD was equal to 315.8 km (SD 179.5) after conversion of the cursor displacements to the map scale. These values correspond to movements performed at a mean velocity of 0.564 km per sec (SD 0.157), still expressed at the map scale, but large differences could be observed between participants, with ED ranging from 254 to 989 sec (participants 1 and 6, respectively), TD ranging from 131.5 to 629.4 km for the same participants, and velocity ranging from 0.293 to 0.721 km per sec (participants 4 and 2).

PI values (0.532, SD 0.686) indicate that, while travelling across the map, participants mainly produced curved trajectories. Indeed, the average deviation from a straight line, computed over three consecutive samples, was about 58 deg, despite some differences between participants (from 48 deg to 65 deg for P4 and P2). Finally, their movements led to about a hundred beacon contacts per participant (102.8, SD 42.8), even if P1 made about four times fewer contacts (46) than P5 (179).

At the phase level, that is, considering investigation and memorization separately, some differences also appeared (Table 5). Indeed, the number of touched beacons was always lower during the investigation phase than during the memorization phase (Wilcoxon test, z = 2.20, P < .05), and this was accompanied by differences in TD (Wilcoxon test, z = 1.99, P < .05) and ED (Wilcoxon test, z = 2.20, P < .05). Nevertheless, since the ratios between investigation (about 1/3) and memorization (about 2/3) were simultaneously maintained for TD and ED, movement velocity remained unchanged across the two phases (Wilcoxon test, z = 1.57, P > .05), despite differences in the curvature of the trajectories reflected by the PI values (t = 25.67, P < .0001). These were higher during the investigation phase (0.650, SD 0.611), corresponding to straighter trajectories, than during the memorization phase (0.405, SD 0.736), within which more pronounced curves were observed. Moreover, this result was confirmed for each participant (t ranging from 7.96 to 15.53, P < .0001).

3.3. Strategies for Reaching Beacons

Despite their apparent complexity, we hypothesized that the movements of the haptic cursor were not randomly distributed and that, in particular, the sequences of contacts with the different beacons obeyed specific rules reflecting exploration strategies. In this section, we identify five such strategies.

Three of them were quantitatively assessed by means of appropriate algorithms (a detection sketch is given after this list).
(i) The “back-and-forth” strategy [18, 19], as mentioned earlier, consists of repeated movements between the same two beacons (e.g., beacon A-beacon B-beacon A).
(ii) Although the “cyclic” strategy [18, 19] consists of successive visits to all the different objects, the same object being visited at the beginning and at the end of the sequence, we also took into account successive visits to three, four, or five beacons before the first one was touched again.
(iii) The “point of reference” strategy has been described [20] as a set of “back-and-forth” patterns converging on the same element. It corresponds to sequences during which the same beacon was systematically touched after each contact with the other ones, leading to star-shaped patterns. Since six beacons were displayed in the VE, we could observe stars with at most five branches (e.g., beacons A-B-A-C-A-D-A-E-A-F-A), but we also took into account stars with only 4, 3, or 2 branches. These latter were named “V-shapes”.
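The sketch below illustrates how such patterns can be detected in a sequence of beacon contacts. The detection rules are simplified reconstructions of the definitions above, not the algorithms actually used in the study, and the example sequences (generic labels A-F) are invented.

from itertools import groupby

def contact_sequence(raw):
    """Collapse consecutive duplicate contacts (e.g., A A B -> A B)."""
    return [b for b, _ in groupby(raw)]

def count_back_and_forth(seq):
    """Count A-B-A triplets, i.e., repeated movements between the same two beacons."""
    return sum(1 for i in range(len(seq) - 2) if seq[i] == seq[i + 2] != seq[i + 1])

def star_branches(seq):
    """If every second contact is the same central beacon, return the number of
    distinct other beacons reached from it (the branches of the star); otherwise 0."""
    if len(seq) < 5 or len(set(seq[0::2])) != 1:
        return 0
    return len(set(seq[1::2]))

def is_cycle(seq):
    """True if the sequence visits distinct beacons and returns to the first one."""
    return len(seq) >= 3 and seq[0] == seq[-1] and len(set(seq[:-1])) == len(seq) - 1

print(count_back_and_forth(list("ABABA")))       # 3 back-and-forth patterns
print(star_branches(list("ABACADAEAFA")))        # 5-branch "point of reference" star
print(is_cycle(list("BCDEFB")))                  # a cycle over five beacons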

The two remaining strategies were assessed by means of visual inspection of the displacements of the haptic cursor.
(i) The “perimeter” strategy [16] corresponds to displacements along the physical limits of the virtual workspace. These limits were determined by the mechanical properties of the haptic device manipulated by the participants; in our case, they formed an elliptical shape.
(ii) The “grid” strategy [16] consists of repeated displacements of the cursor along straight parallel lines, followed by displacements along a second series of straight parallel lines perpendicular to the first.

At the global level, that is, considering all the kinds of strategies mentioned above, we distinguished 117 individual strategies. Our analysis clearly revealed that 37% of the identified exploration strategies occurred during the investigation phase, whereas the remaining 63% occurred during the memorization phase (Figure 4). Nevertheless, they were not evenly distributed within each of these two phases. Indeed, with 39% of the identified sequences consisting of repeated movements between the same two beacons, the “back-and-forth” exploration pattern was widely involved in the activity of the six participants (Figure 4), but in different proportions (12% and 27% for the investigation and memorization phases, resp.). In a complementary way, we observed that the “point of reference” pattern represented 25% of the total number of strategies, with only 7% appearing during the investigation phase and 18% taking place during the memorization phase. The “cyclic” strategy appeared less often than the two previous strategies, but it still represented 18% of the total number, with 5% during the investigation phase and 13% during the memorization phase. Taken together, these first three results revealed that the “back-and-forth”, “point of reference”, and “cyclic” patterns were used almost twice as frequently during the memorization phase as during the investigation phase. Conversely, the “perimeter” (8% of the total number) and the “grid” (10%) strategies were used more often during the investigation phase (6% and 7% of the total number, resp.) than during the memorization phase (2% and 3% of the total number, resp.).

Individually, participants presented different sequences of exploration strategies when learning the configuration (Figures 5 and 6). Indeed, P1 attempted to learn the beacon layout by means of short “cyclic” strategies, whereas P2 tried to memorize the beacon configuration by means of short “point of reference” strategies and long “cyclic” strategies, and P3 combined many short and long “point of reference” strategies. P4 and P5 clearly focused on long “cyclic” strategies to encode the beacon configuration. Finally, P6 mainly used “back-and-forth” and (less often) “cyclic” patterns, even if he continued to employ “perimeter” and “grid” strategies during the investigation phase.

In summary, two main characteristics emerge from these analyses performed at the individual level. On the one hand, it appears that even if the different versions of the “cyclic” strategy were not much used during the investigation phase, they were systematically employed by every participant during the memorization phase. However, only participants 2, 4, and 5 performed complete six-beacon cycles. On the other hand, the results revealed that every participant except the first one used at least one version of the “point of reference” strategy. Finally, only participants 2 and 3 used “point of reference” strategies with four or five branches, that is, covering almost the whole configuration.

4. Discussion

In this experiment, we immersed six blind participants in a haptic and auditory maritime VE and asked them to learn the spatial locations of a set of six beacons. Then, without reference to the VE, the participants had to answer two series of questions. In the first series, the axis of the participant was aligned with the north of the map (aligned situation), whereas, in the second series, the north and the participant's axis were always different (misaligned situation). In this latter case, participants were forced to mentally rotate their own position within the map in order to coordinate egocentric and allocentric frames of reference.

With a view to improving map reading techniques for blind people, the aim of our study was to assess how accurate blind sailors could be when constrained to coordinate both spatial frames of reference by themselves, as is the case when they have to read a map, determine their current location, and plan displacements. Understanding the cognitive processes involved in reading a map required analyzing their spatial performances (AE), the kinematics of their haptic exploration patterns, and the relationships between the two. These three points are used to explain the results of the present experiment.

4.1. Can Blind Sailors Be Accurate When Learning Haptic and Auditory Maritime VE?

Even if we cannot exclude that the results obtained by Warren [30] may be task dependent, 15 degrees appears to be the minimal threshold distinguishing accurate estimated directions from inaccurate ones. Following this criterion and looking at the AE, we found that three of our participants were accurate in the aligned situation and that two of these three (P2 and P3) were also accurate in the misaligned situation. Moreover, they did not present any significant difference between AE in the aligned and misaligned situations. This leads to the inference that they did not encounter major difficulties in coordinating egocentric and allocentric spatial frames of reference. Fulfilling these two criteria (i.e., accuracy and coordination) appears to be the key condition for navigation efficiency [31], and only two of our participants met them. Neither accuracy nor coordination was obtained by P4 and P5, whereas P1 was only accurate in the aligned situation, indicating that he could not coordinate egocentric and allocentric frames of reference. Furthermore, despite the lack of difference between his aligned and misaligned results, one cannot consider that P6 coordinated both frames of reference, since his accuracy level remained mediocre.

Thus, our results suggest that only P2 and P3 could perceive the salient features of the layout, encode relevant landmarks in long-term memory, and recall appropriate information in working memory in order to master spatial tasks and facilitate future navigation. Here, the coordination of egocentric and allocentric spatial frames of reference requires the ability to use a mental representation that remains independent of the individual's orientation [4] but provides the participant with a more or less distorted geometric shape of the whole configuration [32]. Supporting the findings of Thinus-Blanc and Gaunet [11], this coordination mechanism implies extracting psychological invariants, which are the connections between well-known schemata, considered as typical geometric shapes elaborated within an allocentric frame of reference, and a shape extracted from the environment and encoded from an egocentric point of view (or haptic view).

Since the movements performed during the exploration phase are the only way for participants to interact with the VE and gain spatial knowledge, we wondered whether the characteristics of the exploration patterns could explain how efficiently invariants were extracted.

4.2. Can Particular Kinematics Features and Specific Strategies Be Identified within the Exploration Patterns?

The fact that every participant spent one-third of the time and travelled distance in the investigation phase and the remaining two-thirds in the memorization phase, without modifying their average velocity, leads us to think that participants devoted twice as much effort to encoding beacon positions as to discovering them. It is therefore likely that, at least in this experiment, the time and distance spent during the investigation phase constitute valuable predictors of the amount of resources needed to complete the memorization phase. This one-third to two-thirds ratio also appeared in the number of exploration strategies we identified, whereas only a quarter of the beacon contacts occurred during the investigation phase and three quarters during the memorization phase. This shows that the frequency of beacon contacts increased during the memorization phase. Conversely, the parallelism index decreased during this phase. Taken together, these results indicate that during the memorization phase the strategies were longer in terms of touched beacons and mostly consisted of abrupt direction changes as soon as a beacon was touched in order to reach another one. In doing so, participants elaborated specific polygons whose vertices were the beacons and which could be assimilated to already known geometric shapes, thus favoring the extraction of spatial invariants [11].

All participants used typical exploration patterns during the investigation phase (“back-and-forth”, “perimeter”, “grid”). During the memorization phase, some of them mainly used the “reference point” strategy, whereas others rather used the “cyclic” one. This leads to the idea that they built different mental geometric shapes, probably encoded in distinct spatial frames of reference. According to previous works [18, 20, 33], the “reference point” strategy involves the allocentric spatial frame of reference, whilst the “cyclic” strategy rather involves the egocentric one. Indeed, Klatzky [34] proposed that an “object-centered representation” is necessary to perform efficient mental rotations (misaligned situation), whereas a “body-centered representation” allows individuals to carry out mental translations (aligned situation). Looking at our results, one could suggest that only P2 and P3 were able to efficiently rotate and translate their mental beacon configuration, because they obtained equivalent angular errors in both situations. This finding raises the question of whether specific exploration strategies could improve spatial performance.

4.3. What Are the Relationships between the Spatial Performances and Haptic Exploration Patterns?

Participants 1 and 6 mainly used sequential “back-and-forth” patterns and thus probably stored only multiple discontinuous pieces of the layout. In doing so, they could encounter difficulties in connecting them in a coherent and global manner. Conversely, the four other participants used long “object-to-object” strategies containing contacts with every beacon (“point of reference” and “cyclic”) and could rapidly construct a complete geometric representation of the beacon configuration. Among them, however, one can wonder why only P2 and P3 maintained a high level of accuracy and could still coordinate both spatial frames of reference.

Focusing on their exploration patterns, it appears that P2 and P3 were the only ones who used long “point of reference” strategies. They produced star-shaped patterns with four and five branches, respectively, whereas the other participants never exceeded two branches (V-shape). The case of P2 is particularly interesting since he was the only participant who combined a star-shaped strategy with four branches with two “cyclic” patterns containing all the beacons of the configuration, and one could ask what role each of those strategies played. Looking at the other participants, we can observe that four complete “cyclic” strategies without any “point of reference” strategy led to poor performances (P4), whereas two full “point of reference” strategies without any “cyclic” strategy were highly efficient (P3).

Several reasons may explain the poor performances elicited by the “cyclic” strategy. The series of beacons to be touched is reached in a given order that can be described as unidirectional. Consequently, inferring directions between beacons can require the participant to mentally follow the course in the same direction and can provoke the accumulation of angular errors at each turn around a stored beacon. This mechanism can be compared to the well-known path integration process used by blind and sighted humans to displace their whole body in the absence of external cues [35].
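A simple error model (our illustration, not taken from the cited studies) makes this accumulation explicit: if each mentally replayed turn adds an independent angular error $\varepsilon_i$ of standard deviation $\sigma$, then after $k$ legs of the cycle the inferred heading is
\[
\hat{\theta}_k = \theta_k + \sum_{i=1}^{k} \varepsilon_i,
\qquad
\operatorname{SD}\!\left(\sum_{i=1}^{k} \varepsilon_i\right) = \sigma\sqrt{k},
\]
so the expected error grows with the number of legs mentally traversed, whereas returning to a common reference beacon restarts the summation from a single term.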

Conversely, several reasons explain the advantages of the “point of reference” strategy. Using this strategy, one establishes direct bidirectional connections between a stabilized beacon in the center of the layout and each of the other beacons. In such a case, we propose to name this pattern the “central point of reference” strategy, which balances the whole configuration in terms of angles and distances around the most salient landmark. In other words, the participant builds a mental star shape composed of many incomplete triangles that share the same vertex (the central point) and can have a common edge. This network facilitates the mental completion of triangles [34] and thus allows participants to reduce the number of inferences needed to deduce shortcuts between two beacons that were not previously connected. Moreover, from a path integration perspective, the efficiency of the process is also enhanced since the accumulated angular error is reset each time the participant touches the central point.
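The following sketch illustrates this triangle completion: knowing only the distance and compass bearing of two peripheral beacons from the central one, the direct route between them can be deduced without any further exploration. The coordinates are invented for illustration and do not correspond to the experimental layout.

import math

def polar_to_xy(distance, bearing_deg):
    """Convert (distance, compass bearing) relative to the central beacon into
    (east, north) coordinates."""
    rad = math.radians(bearing_deg)
    return distance * math.sin(rad), distance * math.cos(rad)

def shortcut(b_from_center, c_from_center):
    """Bearing (deg) and distance of beacon C as seen from beacon B, given the
    polar coordinates of both beacons relative to the central beacon."""
    bx, by = polar_to_xy(*b_from_center)
    cx, cy = polar_to_xy(*c_from_center)
    de, dn = cx - bx, cy - by
    return math.degrees(math.atan2(de, dn)) % 360.0, math.hypot(de, dn)

# Hypothetical star: one beacon 3 km at 40 deg and another 5 km at 150 deg
# from the central beacon.
bearing, dist = shortcut((3.0, 40.0), (5.0, 150.0))
print(round(bearing, 1), round(dist, 2))   # direct route between the two peripheral beacons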

Referring to previous findings [11], this particular beacon constitutes an invariant which favors the cognitive coordination between egocentric and allocentric spatial frames of reference. Indeed, the “central point of reference” strategy combines two well-known strategies already identified in the locomotor domain: the allocentric “reference point” strategy depicted by Tellevik [20] as a set of “back-and-forth” patterns converging toward the same element and the egocentric “home-base-to-objects” strategy [16, 17] which is a set of “back-and-forth” patterns between the initial position of the participant and the position of other elements.

5. Conclusion

Given that allocentric representations are encoded in long-term memory and that the egocentric system is required to interact with the environment [36], we suggest that, when using the “central point of reference” strategy, participants memorized the star-shaped geometric schema as an enduring representation and mentally projected their whole body into the center of the configuration to link this representation with imagined egocentric perceptions. Being immersed within the VE could certainly facilitate the articulation between top-down processes, which organize spatial knowledge, and bottom-up mechanisms, which extract salient sensory information, in order to construct a single functional representation allowing spatial tasks to be managed efficiently. This suggests that using the complete version of the “point of reference” exploration strategy remains a powerful way to learn a beacon configuration in a VE. However, it raises the question of whether explicit instructions to use the “central point of reference” pattern could provide participants with a solution to accurately combine egocentric and allocentric spatial frames of reference. If that were the case, new perspectives could be proposed for learning methods and programs devoted to helping blind people organize their spatial knowledge.

Such an approach would require new experiments to determine whether exploration strategies are the cause or the consequence of a particular level of spatial skill. One could think that both play a role in a circular process within which performing new exploration strategies could improve the spatial skill level, but also within which performing a particular strategy might be impossible unless a specific skill level has been reached. This can be considered as a hypothesis for future research addressing the question of the influence of spatial layouts. Indeed, even if our study showed that the central point of reference strategy appears to be the most efficient, when designing virtual environments devoted to human learning it remains important to determine which parameters of the layouts are the best levers to improve spatial knowledge. Nevertheless, this could reveal important individual differences, and one cannot exclude that certain exploration strategies could be better for some blind participants than for others, depending on the way they have built their spatial mental representations.

Finally, we emphasize that our results concern completely blind people. We note that the congenitally blind participants (participants 5 and 6) did not use the most efficient strategy and did not obtain the best results (obtained by participants 2 and 3). Even if all participants were used to manipulating tactile geographic maps, and thus our results concern experts rather than beginners, our sample is probably too small to propose a definitive conclusion about visual experience. Moreover, the central point of reference strategy was shown to be efficient when participants used our “single finger” system on a configuration of six elements in a 30 cm × 40 cm workspace, but it has not been validated in other conditions or after a long-lasting training program. For all these reasons, we remain very cautious about extrapolating our results until other complementary modalities have been tested.

Conflict of Interests

The authors report no conflict of interests. The authors alone are responsible for the content and writing of the paper.

Acknowledgments

The authors would like to express their gratitude to all those who made it possible to complete this study. The authors thank the blind sailors of the Orion association who agreed to take part in the experiments, the CECIAA company for funding, and the master's students in computer science at the European Center for Virtual Reality (http://www.cerv.fr/) for helping with the implementation of SeaTouch.