Advances in Human-Computer Interaction
Volume 2013 (2013), Article ID 740324, 15 pages
Virtual Sectioning and Haptic Exploration of Volumetric Shapes in the Absence of Visual Feedback
School of Information Sciences, University of Tampere, Kanslerinrinne 1, PINNI B, 33014 Tampere, Finland
Received 20 March 2013; Revised 27 May 2013; Accepted 3 June 2013
Academic Editor: Kerstin S. Eklundh
Copyright © 2013 Tatiana V. Evreinova et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A reduced exploratory behavior for the exploration of volumetric data, based on the virtual sectioning concept, was compared with free scanning using the StickGrip linkage-free haptic device. Profiles of the virtual surface were simulated through penholder displacements in relation to the pen tip of the stylus. One or two geometric shapes (cylinder, trapezoidal prism, ball, and torus), their halves, and a ripple surface were explored in the absence of visual feedback. In the free scanning mode, the person physically moved the stylus. In the parallel scanning mode, cross-sectional profiles were generated automatically, starting from the location indicated by the stylus. Analysis of the performance of 18 subjects demonstrated that the new haptic visualization and exploration technique allowed them to create accurate mental images and to recognize and identify virtual shapes. The mean number of errors was about 2.5% in the free scanning mode and 1.9% and 1.5% in the parallel scanning mode at playback velocities of 28 mm/s and 42 mm/s, respectively. All participants agreed that haptic visualization of the 3D virtual surface, presented as cross-sectional slices of the workspace, was robust and easy to use. The method was developed for the visualization of spatially distributed data collected by sensors.
Even in the absence of direct contact and visual feedback, people have to explore physical properties such as friction and roughness, and the compliance and stiffness of an environment (in geophysics and monitoring) or of materials (in nondestructive testing). Complementing visual information, existing haptic shape-rendering algorithms focus on rendering the interaction between the tip of a haptic probe and the virtual surface. Using a haptic interface and analyzing the effects of different types of force feedback, the operator of a hand-held detector can feel changes in roughness, rigidity, and other physical properties of a contact. However, human perception of spatially distributed data, for example, surface topography with varying stiffness, relying on single-point-based exploration techniques often fails to provide a realistic feeling of complex topological 3D surfaces and intersections [1–7]. Although manual palpation can be very effective, free scanning with a single-point inspection is unnatural and significantly increases the cognitive load needed to establish the right relations between successive samples of sensory information separated in space and time. Haptic recognition and identification of spatial objects, their unique shape, and location do not always lead to the correct decision [9, 10]. Therefore, it remains a great challenge to develop new techniques for tangible exploration of and interaction with virtual objects in order to facilitate the interpretation of spatially distributed data obtained, for example, by a hand-held detector.
The main difference between interacting with physical objects using the fingers and interacting with virtual objects using a rigid probe is that natural manipulation involves multiple areas of objects and fingertips and relies on multiple sources of haptic information. This being so, important components of surface exploration are the kinesthetic sense of distance to the surface of interaction and self-perception of the finger joint-angle positions. Competing afferent flows allow the person to immediately sense relative differences between adjacent locations by sharpening the curvature gradient due to the lateral inhibition phenomenon [14, 15].
The question of how to efficiently explore complex volumetric surfaces by relying on the haptic sense remains open. The key issue to be solved is an accessible 3D frame of reference and a means of displaying specific exploratory patterns as a sequence of haptic probes. What would happen if haptic visualization were more complicated than just point-based local characteristics (a physical parameter or perceptual quality) of the region under the cursor? Could the temporal structuring of sequentially presented spatial data facilitate their mental integration? Could exploratory movements across the volumetric surface, kept in sync, help in integrating haptic signals?
This paper begins with a discussion of related work. We then present the method and design principles of the experimental setup and the results of a comparative study of two approaches to the presentation and exploration of volumetric shapes in the absence of visual feedback. Finally, we summarize our results and draw conclusions. Some references relate to studies carried out with blind people. However, we note that the technique in question was developed for engineers and technicians (sighted people) as an alternative visualization of surface topography and spatially distributed data collected by sensors [11, 17–20]. Thus, it is not aimed at blind and visually impaired people.
Depending on tactile experience and topological imagination, raised-line drawings, even when explored with one or two hands, may be difficult for blindfolded sighted observers, who must integrate nonvisual information to identify the overall pattern. In some cases, exploration of swell-paper images placed on a graphical tablet and augmented with audio feedback has facilitated the representation of the topology of a graphical environment, although the sonification of graphs does not work flawlessly [22, 23].
In experiments with elementary and composite planar geometric shapes, haptic and auditory modalities were combined to improve the mental representation of topological relations in the absence of visual feedback. However, the authors concluded that the constraints of single-point exploration techniques do not permit the presentation of certain basic concepts, such as parallelism (of lines and planes) and the intersection of 3D shapes, in a simple and intuitive way.
Usually, when asked to identify a virtual object, subjects rely on sensory cues and previous experience gained through observation and manipulation involving physical contact with an object embedded in specific contextual settings [25, 26]. To create a true mental image of a geometric shape, an observer has to collect any accessible information about the features of the object, such as local irregularities of the surface (edges, vertices, convex and concave features, and flatness), and then integrate tactile, proprioceptive, and kinesthetic cues in a specific way [17, 27–32]. However, in the absence of visual feedback, identification of objects of different levels of complexity (number and shape of elements, their symmetry and periodicity) is greatly affected by the conditions of presentation and the exploration techniques [33, 34]. In particular, objects with smooth curved boundaries are more difficult to distinguish than polygons, which may lead to misinterpretation of rounded 3D shapes [21, 35].
A systematic exploration of successive locations creates a sequence of sensations from which the person forms hypotheses and mentally retrieves a virtual profile. This helps to identify the surface, that is, to recognize and classify the contact area as, for example, curved outward (convex), curved inward (concave), or flat. Many attempts have been made to specify generic types of surface discontinuity [37–40]. However, to recognize a surface discontinuity, the person should analyze not the absolute parameters of the contacts in different locations but their relative positions, that is, local irregularities such as shifts and displacements with respect to a common reference point, a reference surface, or the relative finger displacements [11, 13, 36].
Some textural features of a virtual surface can be simulated using pseudographic tactile cells that display a small area of the surface around the pointer, where visible irregularities are transformed into a pattern of raised pins. With the appearance on the market of refreshable braille cells, for example, from Metec AG, this module functionality was extensively tested by physically connecting it to different input devices such as a stylus, mouse, and joystick. Interaction with geometric shapes was also a subject of evaluating the functionality of such a reduced display area. Interestingly, the subjects preferred visualization techniques that avoided redundant information about local details and thereby yielded a better presentation of the overall indicative features and trends [43–45]. Another approach to exploring virtual images consisted of creating a kind of haptic profilometer (surface profiler), for instance, a two-axis H-frame (absolute) positioning system with braille cells mounted on a carriage able to move along guiding rails. With such a haptic display, in the absence of visual and auditory feedback, blindfolded sighted persons were able to recognize the features of polygonal tactile shapes and to identify them from a given list of objects.
It is important to note that the flat surface of interaction determines not only the sensory-motor coordination and the exploration strategy adequate for a given task but also the way of mental processing (componential analysis, feature extraction, and classification) and the reconstruction of the entire image, or pieces of it, from the perceptual data collected. Moreover, an exploration strategy acts as a perceptual filter and a mechanism of compression of sensory information. Depending on the scanning velocity and the perceptual threshold, variation of the probe positions is perceived sparsely but effectively, allowing the user to differentiate the gradient of the surface and its global and local irregularities. It is noteworthy that identifying small virtual objects, even when augmented with static and dynamic friction, is a more difficult task than recognizing physical models examined naturally by palpation.
To improve haptic simulation techniques, researchers have compared the accuracy of identifying virtual 3D objects and their physical models in the absence of visual feedback. Without visual feedback, it is hard to establish a proper frame of reference that remains accessible at any point of interaction with different components of a virtual 3D object. Therefore, 3D shape recognition and identification demand far more cognitive resources than the perceptual analysis of flat graphs.
In earlier studies of the exploration of virtual objects, experimenters used the PHANToM haptic device and, later on, the PHANTOM Omni, Omega.3, Novint Falcon, and other linkage-based force-feedback devices. However, when inspecting with a rigid probe, the person can still contact only a single point of the examined objects at a time [9, 47]. Consequently, to correlate all the information collected when analyzing the profile of shapes through displacements of the tip of the rigid probe, an observer has to choose a frame of reference and an optimal scanning strategy to discover the features of curved surfaces. When no common frame of reference is available, the person can explore a virtual 3D object piece by piece using occasional sources of reference, such as easily detected landmarks (edges, vertices, and faces) or even the skin surface of the nondominant (opposite) hand.
The results reported by Stamm and coworkers demonstrated the limits and problems of interpreting shapes and their components in the haptic exploration of corners and edges, shape orientation, and posture. Kyung and coworkers showed that human performance during haptic inspection of geometric polygons using a grabbing force-feedback mouse was significantly better than with a point-based force-feedback interaction technique such as the PHANToM haptic device. Jansson and Larsson investigated prominent features of synthetic human heads with the PHANToM device. This research showed that increasing the amount of haptic information needed to recognize and identify virtual 3D objects soon overloads the perceptual system. The authors concluded that there are three possible solutions for displaying complex virtual objects and scenes in the absence of visual feedback: training the users, simplifying the information communicated to the user, and developing more efficient haptic interaction techniques and devices.
However, haptic information for object recognition consists of not only perceptual components but also highly coordinated voluntary behaviors (navigation and exploration) and cognitive resources (mental representations of physical and conceptual attributes) [50, 51]. Therefore, the key question is how to efficiently display and coordinate these components making complex haptic information easy to perceive and understand.
In this paper, we report the results of a comparison of two approaches to the haptic visualization and exploration of volumetric shapes in the absence of visual feedback: free scanning, and parallel scanning with reduced exploratory behavior. The research aimed to evaluate the applicability and effectiveness of these two approaches.
3. Materials and Method
3.1. The Participants
Eighteen volunteers (ten males and eight females) from the local university participated in this study. The age of the participants ranged from 23 to 29 years, with a mean of 24.5 years. None of them had participated in similar experiments before, and none reported any hearing or sensory-motor problems. All participants were regular computer users and reported being right-handed or using their right hand most frequently. As stated in the introduction, only sighted people were chosen, to enable the evaluation of the benefits and shortcomings of the technique introduced.
The human ability to integrate perceptual information over time and space provides the basis of mental imagery [32, 52]. Nevertheless, in our study, we relied on the fact that the sighted participants had the mental templates (visual-haptic models) of different volumetric objects. To detect specific points, object features, and spatial relations in the absence of visual feedback, an observer should be able to integrate the multiple-touch probes collected in the haptic space. An exploratory strategy is also an important factor and depends on the personal cognitive style of thinking (analytic, holistic, or detail-oriented) and individual haptic experience. Therefore, to facilitate mental processing, haptic information obtained from exploratory patterns should still be structured and firmly synchronized.
The “dimensions” of the haptic space (mapping) may nevertheless differ from the dimensions of the visual space adopted for linkage-based force-feedback devices. To collect a sequence of haptic probes specifying the virtual surface, the person can explore the haptic space through self-directed behavior (free exploration) using suitable hardware and software providing the corresponding haptic signals. Alternatively, a sequence of haptic signals can be generated and presented to the person during a specified time interval, as if the study area were actually being scanned in some direction. We called this technique “reduced exploratory behavior”; it can also be interpreted as motionless exploratory patterns.
For example, as can be seen from Figure 1, multiple-touch probes of the virtual object (the top-view projection of the upper half of a ring torus) are displayed and perceived along a single axis as a gradient of brightness and the corresponding displacements of the point of grasp of the penholder in relation to the pen tip. These multiple-touch probes can be displayed on the time axis and belong to the imaginary section planes. To collect information about the virtual surface, the person marks only the initial position of the section plane within the slit of the stencil frame on the tablet and starts the scanning process by clicking the left button of the tablet. The displacements of the StickGrip could be made proportional, for example, to the grayscale (brightness) level of the invisible image (ring torus). This produces exploratory patterns perceived as virtual cross-sectional slices without the need to physically scan the profile of each trajectory with the Wacom pen.
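The mapping just described, in which the brightness of the invisible grayscale image drives the grip offset during an automatic playback, can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the function names and the linear brightness-to-displacement mapping are our assumptions, with the ±20 mm grip range taken from the device description.

```python
# Sketch (assumed): convert one row of an 8-bit grayscale depth image into
# StickGrip penholder offsets and a timed playback schedule for one scan-line.

GRIP_RANGE_MM = 40.0  # total grip travel (±20 mm, from the device spec)

def row_to_displacements(gray_row):
    """Map grayscale values (0-255) linearly to grip offsets in mm (-20..+20)."""
    return [(g / 255.0) * GRIP_RANGE_MM - GRIP_RANGE_MM / 2 for g in gray_row]

def playback_schedule(gray_row, line_length_mm=85.0, velocity_mm_s=28.0):
    """Pair each sample with its presentation time for automatic playback."""
    duration_s = line_length_mm / velocity_mm_s   # total playback time
    n = len(gray_row)
    dt = duration_s / max(n - 1, 1)               # time between samples
    offsets = row_to_displacements(gray_row)
    return [(i * dt, z) for i, z in enumerate(offsets)]
```

A mid-gray pixel (128) maps to an offset near zero, black to the lowest grip position, and white to the highest, which matches the idea of rendering the surface relief as vertical grip travel.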
However, consider a ripple surface. Clearly, the free scanning technique is independent of the orientation of surface irregularities; this cannot be true for the parallel scanning technique. Nevertheless, we do not consider exploration of virtual shapes in the absence of visual feedback without computer support. That is, the system could analyze spatially distributed (e.g., geophysical) data and manipulate the appearance of the image so that specific features become more prominent and distinguishable in the cross-sectional analysis.
In contrast to mimicking the visual space, haptic exploration of objects with the virtual sectioning concept and the parallel scanning technique with reduced exploratory behavior has the following benefits: (i) there is a fixed reference point within each section plane; (ii) all reference points belong to the same axis, allowing a user to easily correlate exploratory patterns located in parallel section planes and to ascertain relationships, dependencies, and tendencies between corresponding segments presented at particular moments of the timeline; (iii) an exploratory pattern can be repeated within a certain timeframe (e.g., in less than 3 s, Figure 1) as many times as needed to form a correct mental image of each cross-sectional profile; (iv) the virtual sectioning method can be applied to the entire haptic space, or to a part of it, to explore one or several objects at a time.
Finally, the new scanning technique with the reduced exploratory behavior would contribute to the research on data visualization in haptic space.
In spite of advances in existing force-feedback techniques, the work reported here was performed using the StickGrip linkage-free haptic device, which presents a motorized pen grip for the Wacom pen input device as shown in Figure 2 .
The point at which the penholder is grasped slides up and down the shaft of the Wacom pen, so that as the user explores the virtual surface with the pen, s/he feels the hand being displaced towards and away from the physical surface of the pen tablet (Wacom Graphire-4).
The Portescap linear stepper motor (20DAM40D2B-L) required no additional gears and provided low noise and equal torque, with no directional differences in the shaft displacements that might confuse the user. The StickGrip has a grip displacement range of 40 mm (±20 mm) with an accuracy of ±0.8 mm on a Wacom pen 140 mm in length. Grip displacements of the point of grasp at an average speed of about 25 mm/s within this range give accurate feedback about the distance and direction (closer or farther) with respect to the surface of the pen tablet; consequently, such feedback forms part of the afferent information about the heterogeneity of the virtual surface. During preliminary tests of the setup, the two values of 42 and 28 mm/s were adopted for presenting virtual cross-sectional planes in the parallel scanning mode, with a displacement accuracy better than 4%. However, the grip displacements (even at a velocity of 28 mm/s) still constrained the exploration and presentation of the virtual scan-lines when the gradient of deformation of the virtual surface was too high. The distance and direction of the grip displacements were coordinated with the structure of the virtual surface.
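The constraint mentioned above (steep surface gradients could not be fully rendered at the playback velocities) can be expressed as a simple kinematic check: the grip can follow a profile only if its required vertical speed stays within the motor's capability. A hedged sketch, assuming the grip's roughly 25 mm/s average speed acts as its tracking limit:

```python
# Sketch (assumed formulation): feasibility of rendering a cross-sectional
# profile at a given horizontal playback velocity, limited by grip speed.

GRIP_SPEED_MM_S = 25.0  # approximate average grip speed from the device description

def max_renderable_slope(playback_velocity_mm_s):
    """Steepest surface gradient dz/dx the grip can track while the virtual
    scan advances at the given horizontal velocity."""
    return GRIP_SPEED_MM_S / playback_velocity_mm_s

def profile_feasible(profile_mm, dx_mm, playback_velocity_mm_s):
    """True if every segment of the profile stays within the slope limit."""
    limit = max_renderable_slope(playback_velocity_mm_s)
    return all(abs(b - a) / dx_mm <= limit
               for a, b in zip(profile_mm, profile_mm[1:]))
```

Under this formulation, the faster 42 mm/s playback tolerates only gentler gradients (limit ≈ 0.6) than the 28 mm/s playback (limit ≈ 0.9), consistent with the observation that high-gradient regions constrained the presentation.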
The workspace of exploration was bordered by a frame of 60 × 85 mm. The frame was used to limit unnecessary exploratory movements and redundant haptic information, so that the virtual surface could easily be scanned with long strokes between opposite borders in any direction. The virtual surfaces were visualized as 8-bit grayscale images (Figures 4 and 5); thus, the experimenter could monitor the activity of the subjects, as indicated in Figure 2 (at the bottom right).
To facilitate spatiotemporal coordination between the StickGrip displacements and the timeline of the virtual cross-sectional planes, the users had to rely on auxiliary sound signals. During the virtual scanning, these signals were a sequence of short beeps (sine-wave tone pulses of 800 Hz, 65 ms in duration) with an interval of 360 or 240 ms, as illustrated by the white dots in Figure 3. The start and end points of the virtual trajectory were marked with tone pulses of 2.8 kHz, 46 ms in duration. The end-point signal sounded immediately at the end of the playback of each scan-line.
However, the trajectory had a fixed length, and the tone pulses were synchronized with the recorded points (of the StickGrip displacements) along the timeline (Figure 1). Therefore, the last interval was shorter, as indicated in Figure 3.
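The auxiliary sound timing can be sketched as an event schedule. This is an assumed reconstruction: the pairing of the 360 ms and 240 ms intervals with the 28 mm/s and 42 mm/s velocities is our inference (both would correspond to roughly 10 mm of virtual travel per beep), and the shorter final interval falls out of the fixed trajectory length, as noted above.

```python
# Sketch (assumed): beep schedule for one scan-line playback, using the tone
# parameters from the text; the velocity-to-interval pairing is an assumption.

BEEP_HZ, BEEP_MS = 800, 65           # short marker beeps
ENDPOINT_HZ, ENDPOINT_MS = 2800, 46  # start/end markers

def beep_schedule(line_length_mm, velocity_mm_s):
    """Return (time_ms, freq_hz, dur_ms) events for one scan-line playback."""
    interval_ms = 360 if velocity_mm_s <= 28 else 240
    total_ms = 1000.0 * line_length_mm / velocity_mm_s
    events = [(0.0, ENDPOINT_HZ, ENDPOINT_MS)]           # start marker
    t = interval_ms
    while t < total_ms:
        events.append((float(t), BEEP_HZ, BEEP_MS))
        t += interval_ms
    events.append((total_ms, ENDPOINT_HZ, ENDPOINT_MS))  # end marker
    return events
```

Because the end marker is pinned to the fixed trajectory length rather than to the beep grid, the final inter-beep interval comes out shorter, reproducing the behavior described in the text.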
The sequence of sound beeps was the same regardless of the type of shape. Therefore, participants could not use these sounds alone to identify the shapes or their features in the absence of haptic feedback. Sound signals were not used in the free scanning mode because, being presented asynchronously with the haptic information, they could distract and confound the subjects.
A microphone was used (Figure 2, on the left) to record the subjects’ decisions as well as any comments given after the test. Short wav-files were used to deliver voice prompts to the subjects about the application status (“test on,” “task was completed successfully”).
The same set of ten volumetric images was presented to the subjects in each experimental block. However, the subjects were not aware of which specific shapes were going to be presented to them. One or two geometric shapes (cylinder, trapezoidal prism, ball, and torus), their halves, and the ripple surface (10 volumetric images) were explored with the StickGrip haptic device in the absence of visual feedback and identified under three conditions (experimental blocks).
The three conditions were as follows: (i) the baseline condition, named free scanning, was self-directed free exploration of the virtual space; (ii) successive haptic exploration, with reduced exploratory behavior, of cross-sectional profiles lying on parallel planes, named parallel scanning, of virtual trajectories across the Wacom tablet at a scanning velocity of 28 mm/s; (iii) the same parallel scanning mode (with reduced exploratory behavior) at a velocity of 42 mm/s for presenting the virtual cross-sectional planes.
Audio markers accompanied the two conditions of parallel scanning.
To decrease perceptual learning and knowledge transfer, the participants performed the experimental session (three blocks) in one sitting. Both the blocks and the volumetric images were presented in a randomly assigned, counterbalanced order.
Detailed verbal instructions were given to the participants regarding the testing procedure. The subjects started and finished the trials by clicking the right button on the tablet; when ready to continue the test, they were instructed to press this button again. During the free scanning mode (baseline condition), the subjects were asked to explore the virtual profile of the surface within the workspace (the frame), to recognize and imagine the virtual shape(s), and to identify it (or them). When the participants finished scanning the surface, they were asked to click the right button on the Wacom tablet. Immediately after that, they had to announce their decision aloud (into the microphone) by giving a verbal description or title of the virtual image.
In the other two blocks, the same virtual shape(s) were explored through successive playback of cross-sectional profiles of the virtual surface. To initiate the scanning of each cross-section, the subjects had to click the left button of the tablet with the left hand (Figure 3) as many times as needed. The virtual trajectories were played back at the given speed (42 or 28 mm/s), starting from the points indicated by the subject.
The subjects held the StickGrip like an ordinary pen. Since fast displacements of the StickGrip could slightly deflect the stylus from the intended direction (e.g., see the upper tracks across the ball and ripples in Figure 4), subjects were asked to hold the StickGrip in a vertical position.
In general, the starting point could be any location within the workspace. However, to choose the starting point, the subjects were asked to move the StickGrip only along the left border of the frame; the right border of the frame was always the endpoint of the virtual trajectory. The subjects had to detect and memorize the features of the entire profile of each cross-section in order to later integrate them and mentally retrieve the entire surface of the virtual shape(s). Whenever the subjects had a problem recalling the features of a virtual cross-section, they could examine that region again.
Once the subjects had been instructed, they were briefly allowed to practice with the sequence of needed actions in two conditions by exploring the virtual pyramid with free and parallel scanning modes. The results of these trials were excluded from further analysis.
The experimental session (three blocks) took place in the usability laboratory as shown in Figure 2 (on the left) and lasted less than 60 minutes. The subjects were blindfolded and perceived the virtual space relying on kinesthetic and proprioceptive senses. To accomplish the test, the participants had to complete ten trials in each block of set tasks with no time limit. At the end of the test, they were given sound feedback (“task was completed successfully”). Between trials and blocks, the participants had a short (self-paced) break and could ask any questions. After the test the participants were interviewed about their experiences and problems.
The test was performed according to ethical standards. Informed consent was obtained in accordance with the guidelines for the protection of human subjects. No private or confidential information was collected or stored.
In order to reduce variance due to individual differences, the experiment used a within-subjects design in which each participant explored and identified all volumetric images in all three conditions. There were four dependent variables: the task completion time for recognizing (marked by clicking the right button on the Wacom tablet) and identifying the virtual shape (by giving a verbal description or title of the virtual image); the number of virtual cross-sectional profiles (scan-lines) inspected in order to recognize and identify each shape; the number of repeated inspections of the same scan-line; and the number of volumetric images correctly identified. The top-view projections of the virtual shapes (10 images) and the three conditions of their exploration were the independent variables.
The reduced exploratory behavior was expected to improve human performance in recognizing and identifying volumetric images in the absence of visual feedback. Both conditions of exploration (reduced behavior versus free exploration and velocity of virtual scanning) and different levels of complexity of the virtual images (number of objects and elements/attributes, their symmetry and periodicity, and the gradient of the surface discontinuity) could have an impact on human performance.
Human performance was evaluated in terms of the task completion time, the number of virtual cross-sectional profiles (scan-lines) inspected, the number of repeated inspections of the same scan-line, false recognition and/or identification (confusion matrices of the shapes presented), and the exploratory strategies used. The variable number of components of the virtual image (1 or 2 objects, or many ripples) allowed us to differentiate the results of image interpretation. We refer to a recognition error when the number of objects was specified incorrectly and to an identification error when the number of objects was correct but the description or title of the image was inappropriate.
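The error taxonomy defined above can be made concrete with a minimal classifier. The data representation (an object count plus a verbal label per response) is an assumption for illustration, not the authors' scoring code.

```python
# Sketch (assumed representation): score one verbal response against the
# ground truth, distinguishing recognition from identification errors.

def classify_response(true_count, true_label, answered_count, answered_label):
    """Recognition error: wrong object count.
    Identification error: right count, wrong description/title."""
    if answered_count != true_count:
        return "recognition_error"
    if answered_label != true_label:
        return "identification_error"
    return "correct"
```

For example, answering "one ball" when two balls were presented is a recognition error, while answering "torus" for a single ball is an identification error.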
4. Results and Discussion
In total, results were collected from 540 trials of haptic exploration of 10 virtual shapes (images) in three conditions (blocks) by 18 subjects. The statistical analysis was performed using SPSS 18 for Windows (SPSS Inc., Chicago, IL, USA), and OriginPro 8.6 was used for the 3D visualization of exploratory behavior.
4.1. Analysis of Exploratory Strategies
The typical tracks recorded during haptic exploration and identification of virtual shapes in the free scanning mode are presented in Figure 4. Here, we can only demonstrate that during inspection of virtual shapes in the free scanning mode, our blindfolded subjects did not use any specific strategy. By making continuous circular and linear movements (Figure 4), they merely repeatedly scanned the workspace to detect at least the more prominent and global features of the test objects (borders, vertices, convexity, concavity, and flat areas), which probably would better correspond to their own mental representations.
Nevertheless, exploration strategies were influenced by the method, techniques, and shape-related factors: the relative position and size of the virtual shape(s) (two cylinders, two hemispheres, and two trapezoidal prisms), their inherent symmetry (ball and torus), and a specific relief with (torus, two balls, and ripples) or without (the half ball) periodicity of the surface gradient. Most of the subjects reported that the free scanning mode demanded more cognitive effort for mentally matching different pieces of trajectories separated in space and time. In particular, to determine the spatial relationships among pattern components (adjacent edges, their slope, and the direction of slope), each of these components had to be fully analyzed in a separate location.
The typical tracks recorded during haptic exploration and identification of virtual shapes in the parallel scanning mode are presented in Figure 5. At the beginning of the exploration, the subjects tried to define the number of shapes within the frame relying on a sense of the roundedness, straightness, or flatness of exploratory trajectories and spatial intervals between them. They performed a rough inspection with a greater step between virtual cross-sections (ripples, trapezoidal prism, torus, and cylinder). Then, the subjects actually began their exploration of the workspace in more detail (ball, two balls, and two hemispheres) or just a detailed scanning of the key areas (two cylinders, two balls, and two trapezoidal prisms), which could help to identify the object in question.
Although identification of the ball and the half ball was often unsuccessful (Tables 1, 2, and 3), in the brief interview after the test, 14 subjects (78%) out of 18 reported that the ball, cylinder, half ball, and grooved surface (ripples) were the easiest haptic shapes to identify.
Ten out of 18 subjects (55.6%) reported that they actively used sound beeps to “measure” the length of edges and to build the mental model of the virtual shape (e.g., “3 beeps up, 4 beeps straight, and 3 beeps down”). Three out of these ten subjects preferred the low playback velocity of the virtual trajectory of 28 mm/s.
Three out of 18 subjects (16.7%), immediately after the explanation of the test procedure and a short practice, asked for the volume of the sound beeps to be lowered, as they believed that these signals would distract them. For these subjects, the sound volume was lowered by about 20%. At the end of the test, they reported that the sound beeps had not distracted them, but that only the start and stop sounds were useful from their point of view. It is likely that these subjects relied on a holistic encoding strategy, capturing each cross-sectional trajectory as a whole by making “in-air hand gestures.” These three subjects outperformed the others, approaching the minimum completion time, but with a rather high rate of false identifications. However, we need more observations to validate these inferences.
4.2. Evaluation of Human Performance
The goal was to analyze the differences between the two kinds of haptic visualization and exploration of virtual volumetric shapes, under the assumption that the mental representations of sighted people are broadly similar.
4.2.1. Task Completion Time
By relying on the free scanning technique (a baseline condition), the mean task completion time of recognition and identification of the virtual shape was about 59 s with a standard deviation (SD) of about 19 s, varying from a minimum of 13 s (SD = 11 s) to a maximum of 109 s (SD = 20 s) averaged over all participants. The box plots in Figure 6 show the typical pattern of differences in the individual performance under different conditions of exploration of the virtual geometric shapes.
Figure 7 illustrates the mean time of recognition and identification of the virtual shapes for each of the three exploration conditions averaged over all participants. During the parallel scanning mode at the playback velocity of the virtual trajectory of 28 mm/s, the mean task completion time was about 58 s (SD = 14 s) varying from a minimum of 16 s (SD = 12 s) to a maximum of 77 s (SD = 14 s) averaged over all participants. The number of virtual cross-sectional profiles (scan-lines) inspected varied from a minimum of 5 (SD = 1.9) to a maximum of 9.8 (SD = 1.2) with a mean of about 8.5 (SD = 2.6) averaged over all participants. The number of scan-lines of the same shape (Figure 8) varied from a minimum of 2.9 (SD = 1.7) to a maximum of 13 (SD = 1.2) with a mean of about 8.7 (SD = 2.7). The average number of repeated inspections of the same cross-section profile (scan-line) was about 1 (SD = 0.03) varying from a minimum of 1 to a maximum of 1.1 averaged over all participants.
During the parallel scanning mode at the playback velocity of the virtual trajectory of 42 mm/s, the mean of the task completion time (Figure 7) was about 46 s (SD = 19 s) varying from a minimum of 14 s (SD = 14 s) to a maximum of 89 s (SD = 8 s) averaged over all participants. The number of scan-lines of the same shape varied from a minimum of 2.6 (SD = 2) to a maximum of 12 (SD = 3) with a mean of about 8.5 (SD = 3) averaged over all participants. The average number of repeated inspections of the same scan-line was about 1 (SD = 0.04) varying from a minimum of 0.9 to a maximum of 1.1 averaged over all participants.
The grooved surface (ripples) was the only image that was successfully recognized by all participants with both scanning techniques and with minimum effort. To identify the virtual grooved surface (ripples), the subjects spent on average about 35 s (SD = 21 s) using the free scanning mode. During the parallel scanning mode at the playback velocity of the virtual trajectory of 42 mm/s, they needed significantly less time, only about 16 s (SD = 7 s) on average. The mean number of inspections was about 3 (SD = 1.6), which increased approximately twofold (to 5.6, SD = 2.8) when the playback velocity of the virtual trajectory was lowered.
The shapes having smooth rounded surfaces required more time to inspect (Figure 7). In particular, using the free scanning technique, the ball, the two hemispheres, and the half ball required 71 s (SD = 19 s), 69 s (SD = 16 s), and 66 s (SD = 31 s) on average, respectively. With the parallel scanning mode at the playback velocity of the virtual trajectory of 28 mm/s, the times needed to recognize and identify these shapes were 67 s (SD = 14 s), 68 s (SD = 12 s), and 57 s (SD = 17 s), respectively. At the playback velocity of the virtual trajectory of 42 mm/s, the task completion time diminished: 63 s (SD = 11 s) for the ball, 53 s (SD = 8 s) for the two hemispheres, and 49 s (SD = 11 s) for the half ball.
As regards task completion time, the results of the paired samples t-test revealed a statistically significant difference when the virtual surfaces were explored using the free scanning technique and the parallel scanning of frontal cross-sections at the playback velocity of the virtual trajectory of 42 mm/s: (); the correlation index was high and statistically significant, 0.805 (). The difference in exploration of the virtual surfaces in the parallel scanning mode at the two velocities (28 and 42 mm/s) was also statistically significant: (), while the correlation index of this parameter was high and statistically significant, 0.902 ().
However, the paired samples t-test revealed no difference between the free scanning technique and the parallel scanning of frontal cross-sections at the playback velocity of the virtual trajectory of 28 mm/s: (), while the correlation index was high and statistically significant, 0.853 ().
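The comparisons above rest on paired samples t-tests and Pearson correlations over per-subject values. A minimal sketch of both statistics is shown below; the per-subject completion times are invented placeholders for illustration, not the study's data.

```python
import math
from statistics import mean, stdev

def paired_t(a, b):
    """Paired-samples t statistic: mean difference over its standard error."""
    d = [x - y for x, y in zip(a, b)]
    return mean(d) / (stdev(d) / math.sqrt(len(d))), len(d) - 1  # (t, df)

def pearson_r(a, b):
    """Pearson correlation between the two matched condition samples."""
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

# Placeholder per-subject completion times (s) for two conditions
free_scan = [59.0, 71.0, 66.0, 55.0, 48.0, 63.0]
parallel_42 = [46.0, 63.0, 49.0, 44.0, 40.0, 50.0]
t, df = paired_t(free_scan, parallel_42)
r = pearson_r(free_scan, parallel_42)
```

With real data, the t value and its degrees of freedom would then be looked up against the t distribution to obtain the p values reported in the text.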
4.2.2. Number of Inspections (Scanlines)
Regarding the virtual cross-sectional profiles (scan-lines), the results of the paired samples t-test revealed no difference either for the number of scan-lines (), for the number of repeated inspections of the same scan-line (), or for the number of repeated inspections of the same cross-section (). Still, the correlation between the numbers of scan-lines at the two velocities (28 and 42 mm/s) was as high as 0.947 and statistically significant ().
The correlation between the numbers of repeated inspections of the same scan-line was also high, at 0.953, and statistically significant (). The numbers of repeated inspections of the same cross-section revealed a weak correlation of 0.575, which did not reach statistical significance ().
4.2.3. Analysis of Errors
The analysis of the errors made (false recognitions and identifications) in each of the three exploration modes showed that the mean error rate was less than 2.5% over 180 trials (10 virtual volumetric images explored, recognized, and identified by 18 subjects). In particular, the mean error rate was 2.5% (SD = 1.6%) in the free scanning mode, and 1.9% (SD = 1.2%) and 1.5% (SD = 1.3%) in the parallel scanning mode at the playback velocity of the virtual trajectory of 28 mm/s and 42 mm/s, respectively (Figure 9).
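Error rates of this kind are simply error counts over the 180 trials per mode (10 shapes explored by 18 subjects). The counts used below are placeholders for illustration, not the paper's confusion-matrix data.

```python
SHAPES, SUBJECTS = 10, 18
TRIALS = SHAPES * SUBJECTS  # 180 trials per exploration mode

def error_rate(error_count, trials=TRIALS):
    """Percentage of falsely recognized/identified trials in one mode."""
    return 100.0 * error_count / trials

# Placeholder error counts per mode (hypothetical values, for illustration)
rates = {mode: error_rate(n) for mode, n in
         [("free scanning", 5), ("parallel 28 mm/s", 4), ("parallel 42 mm/s", 3)]}
```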
The result of the paired samples t-test revealed no difference in human performance in terms of false recognition and identification of the virtual shapes at the playback velocities of the virtual scan-lines of 28 mm/s and 42 mm/s (); the correlation index was low and not statistically significant: 0.585 (). The paired differences between the errors made during the free scanning mode and the parallel scanning mode at the playback velocities of 42 mm/s and 28 mm/s were statistically significant: () and (). The indices of correlation were also significant: 0.691 () and 0.926 (), respectively.
A further analysis of the confusion matrices of the virtual shapes recognition and identification (Tables 1–3) showed that shapes with different levels of complexity (number of the shapes’ elements, their symmetry and periodicity, and gradient of the surface discontinuity) required different perceptual and cognitive efforts to recognize and distinguish their specific features to integrate them into a coherent mental image.
As can be seen from the tables, false recognition and identification were much affected by the scanning mode and by the perceptual heterogeneity of the shape boundaries. In particular, careless inspection of the virtual profile of the surface within the workspace could be a reason for recognition errors in which the number of objects was specified incorrectly. Another reason could be the growing redundancy of sensory information, which could quickly overload the subjects and hinder them from establishing the right relations between successive samples. However, this kind of error was made more often in the free scanning mode (2.01%) than with the parallel scanning technique (1.12%). In Tables 1–3, thick lines border the recognition error values.
The shapes having smooth rounded surfaces (the ball, the two hemispheres, and the half ball) were more difficult to distinguish than the cylinder, the torus, or the ripple surface, which could be a reason for their misinterpretation. These poorly identified objects are bordered in the confusion matrices. Poorly identified objects accounted for about 4.5% of the total errors made in the free scanning mode, 2.9% in the parallel scanning mode at the playback velocity of the virtual trajectory of 28 mm/s, and 2.5% in the parallel scanning mode at the playback velocity of the virtual trajectory of 42 mm/s.
The imaginary surfaces of virtual shapes can be perceived from the virtual trajectories simulated with displacements of the point of grasp of the penholder. During this study, virtual volumetric shapes with different levels of complexity were presented to blindfolded sighted participants using the StickGrip linkage-free haptic device, the virtual sectioning concept, and the parallel scanning technique with reduced exploratory behavior.
The virtual shapes with smooth rounded surfaces (the ball, the two hemispheres, and the half ball) were more difficult to distinguish, and completing their identification required about 70 seconds. These results corroborate experimental observations noted in previous studies [21, 35]. The torus and the grooved surface (ripples) were easily identified, and their exploration required much less cognitive effort and time (15–40 seconds). However, the case of the ripple surface demonstrated the need for adaptive adjustment of the visualization parameters for presentation of specific features, with respect to the robustness and the sensitivity of the technique.
The number of scan-lines inspected in order to recognize and identify the shape and the average number of repeated inspections of the same scan-line revealed no statistically significant difference between the two exploration conditions. The average number of repeated inspections of the same scan-line was about one. The scanning velocity of the virtual trajectories presenting cross-sectional profiles is a crucial parameter of the parallel scanning technique: at a penholder displacement speed of 42 mm/s, the subjects achieved significantly better results than at a scanning velocity of 28 mm/s. Nevertheless, these parameters could be customized or adjusted depending on the information presented (e.g., the density of the virtual surface irregularities). The speed of displacements of the penholder should be increased and adapted for visualization of volumetric data with a high gradient of spatial discontinuity.
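The effect of the playback velocity on per-profile exploration time is elementary arithmetic; the sketch below only illustrates the trade-off, and the 84 mm scan-line length is an assumed value, not taken from the paper.

```python
def scanline_playback_time(length_mm, velocity_mm_s):
    """Seconds needed to play back one cross-sectional profile."""
    return length_mm / velocity_mm_s

# Assumed 84 mm scan-line length at the two velocities used in the study
slow = scanline_playback_time(84.0, 28.0)  # s per profile at 28 mm/s
fast = scanline_playback_time(84.0, 42.0)  # s per profile at 42 mm/s
```

Since the number of scan-lines and repeated inspections did not differ between velocities, the shorter per-profile playback time at 42 mm/s translates almost directly into a shorter task completion time.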
All participants agreed that visualization of exploratory patterns presented as the virtual cross-sectional slices of the workspace was robust and extremely easy to use, which enabled them to create accurate mental images.
In further research, we plan to confirm the universality of the cross-sectional virtual scanning concept and the reduced exploratory behavior using data sonification.
The authors gratefully acknowledge the support of Finnish Academy Grant 127774. The authors would like to thank the reviewers for their valuable comments and suggestions to improve the quality of the paper.
- P. Boytchev, T. Chehlarova, and E. Sendova, “Virtual reality vs real virtuality in mathematics teaching and learning,” in Proceedings of the WG 3.1 & 3.5 Joint Working Conference Mathematics and ICT: a 'golden triangle' (IMICT '07), D. Benzie and M. Iding, Eds., College of Computer and Information Science, Northeastern University, Boston, Mass, USA, 2007.
- K. Kahol and S. Panchanathan, “Distal object perception through haptic user interfaces for individuals who are blind,” in Newsletter ACM SIGACCESS Accessibility and Computing, no. 84, pp. 30–33, ACM, New York, NY, USA, 2006.
- H. H. King, R. Donlin, and B. Hannaford, “Perceptual thresholds for single vs. multi-finger haptic interaction,” in Proceedings of the IEEE Haptics Symposium (HAPTICS '10), pp. 95–99, IEEE Haptics Symposium, Washington, DC, USA, March 2010.
- Y. Liu and S. D. Laycock, “A haptic system for drilling into volume data with polygonal tools,” in Proceedings of the Eurographics Association, W. Tang and J. P. Collomosse, Eds., pp. 1–8, EG UK Theory and Practice of Computer Graphics, Cardiff University, 2009.
- N. Magnenat-Thalmann and U. Bonanni, “Haptics in virtual reality and multimedia,” IEEE Multimedia, vol. 13, no. 3, pp. 6–11, 2006.
- S. Mayank, Implementation and evaluation of a multiple-points haptic rendering algorithm [M.S. thesis], Russ College of Engineering and Technology, Ohio University, 2007.
- W. Yu, R. Ramloll, and S. Brewster, “Haptic graphs for blind computer users,” in Haptic Human-Computer Interaction, S. A. Brewster and R. Murray-Smith, Eds., pp. 41–51, Springer, Berlin, Germany, 2001.
- M. W. A. Wijntjes, T. van Lienen, I. M. Verstijnen, and A. M. L. Kappers, “Look what I have felt: Unidentified haptic line drawings are identified after sketching,” Acta Psychologica, vol. 128, no. 2, pp. 255–263, 2008.
- G. Jansson and K. Billberger, “The PHANToM used without visual guidance,” in Proceedings of the 1st Phantom Users Research Symposium (PURS '99), pp. 27–30, Heidelberg, Germany, 1999.
- J. F. Santore and S. C. Shapiro, “Identifying an object that is perceptually indistinguishable from one previously perceived,” in Proceedings of the 19th National Conference on Artificial Intelligence (AAAI '04), pp. 968–969, The MIT Press, July 2004.
- A. Withana, Y. Makino, M. Kondo, M. Sugimoto, G. Kakehi, and M. Inami, “Impact: Immersive haptic stylus to enable direct touch and manipulation for surface computing,” Computers in Entertainment, vol. 8, no. 2, article 9, 2010.
- H. Yano, M. Nudejima, and H. Iwata, “Development of haptic rendering methods of rigidity distribution for tool-handling type haptic interface,” in Proceedings of the Eurohaptics Conference 2005 and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (WHC '05), pp. 569–570, 2005.
- H. Z. Tan, M. A. Srinivasan, C. M. Reed, and N. I. Durlach, “Discrimination and identification of finger joint-angle position using active motion,” ACM Transactions on Applied Perception, vol. 4, no. 2, pp. 1–14, 2007.
- G. von Békésy, Sensory Inhibition, Princeton University Press, 1967.
- F. Vega-Bermudez and K. O. Johnson, “Surround suppression in the responses of primate SA1 and RA mechanoreceptive afferents mapped with a probe array,” Journal of Neurophysiology, vol. 81, no. 6, pp. 2711–2719, 1999.
- R. L. Klatzky, S. J. Lederman, and J. M. Mankinen, “Visual and haptic exploratory procedures in children's judgments about tool function,” Infant Behavior and Development, vol. 28, no. 3, pp. 240–249, 2005.
- T. V. Evreinova, G. Evreinov, and R. Raisamo, “Interpretation of ambiguous images inspected by the StickGrip device,” in Proceedings of the IADIS International Conference on Interfaces and Human Computer Interaction (IADIS IHCI '11), pp. 209–217, 2011.
- T. V. Evreinova, G. Evreinov, and R. Raisamo, “Estimating topographic heights with the StickGrip haptic device,” in Proceedings of the International Symposium on Multimedia Applications and Processing (MMAP '11), pp. 691–697, September 2011.
- T. V. Evreinova, G. Evreinov, and R. Raisamo, “Haptic visualization of bathymetric data, a case study,” in Proceedings of the Haptic Symposium (Haptics '12), pp. 359–364, 2012.
- T. V. Evreinova, G. Evreinov, and R. Raisamo, “Evaluation of effectiveness of the StickGrip device for detecting the topographic heights on digital maps,” International Journal of Computer Science and Application, vol. 9, no. 3, pp. 61–76, 2012.
- P. Roth, L. Petnicci, and T. Pun, “From dots to shapes: an auditory haptic game platform for teaching geometry to blind pupils,” in Proceedings of the 7th International Conference on Computers Helping People with Special Needs (ICCHP '00), pp. 603–610, 2000.
- G. Evreinov and R. Raisamo, “An evaluation of three sound mappings through the localization behavior of the eyes,” in Proceedings of the 22nd International Conference on Virtual, Synthetic and Entertainment Audio (AES '02), pp. 239–248, New York, NY, USA, 2002.
- P. B. L. Meijer, “An experimental system for auditory image representations,” IEEE Transactions on Biomedical Engineering, vol. 39, no. 2, pp. 112–121, 1992.
- S. Rouzier, B. Hennion, T. P. Segovia, and D. Chêne, “Touching geometry for visually impaired pupils,” in Proceedings of EuroHaptics, pp. 104–109, 2004.
- N. Gronau, M. Neta, and M. Bar, “Integrated contextual representation for objects' identities and their locations,” Journal of Cognitive Neuroscience, vol. 20, no. 3, pp. 371–388, 2008.
- A. Theurel, S. Frileux, Y. Hatwell, and E. Gentaz, “The haptic recognition of geometrical shapes in congenitally blind and blindfolded adolescents: is there a haptic prototype effect?” PLoS ONE, vol. 7, no. 6, Article ID e40251, 2012.
- I. Biederman, “Recognition-by-components: a theory of human image understanding,” Psychological Review, vol. 94, no. 2, pp. 115–147, 1987.
- K. E. Overvliet, J. B. J. Smeets, and E. Brenner, “The use of proprioception and tactile information in haptic search,” Acta Psychologica, vol. 129, no. 1, pp. 83–90, 2008.
- M. Singh, “Modal and amodal completion generate different shapes,” Psychological Science, vol. 15, no. 7, pp. 454–459, 2004.
- S. Ullman, “The visual analysis of shape and form,” in The Cognitive Neurosciences, M. S. Gazzaniga and E. Bizzi, Eds., pp. 339–350, MIT Press, Cambridge, Mass, USA, 1995.
- J. Voisin, G. Benoit, and E. C. Chapman, “Haptic discrimination of object shape in humans: two-dimensional angle discrimination,” Experimental Brain Research, vol. 145, no. 2, pp. 239–250, 2002.
- B. Wu, R. L. Klatzky, and G. D. Stetten, “Mental visualization of objects from cross-sectional images,” Cognition, vol. 123, no. 1, pp. 33–49, 2012.
- E. Foulke and J. S. Warm, “Effects of complexity and redundancy on the tactual recognition on metric figures,” Perceptual and Motor Skills, vol. 25, no. 1, pp. 177–187, 1967.
- A. M. Kappers, J. J. Koenderink, and I. Lichtenegger, “Haptic identification of curved surfaces,” Perception and Psychophysics, vol. 56, no. 1, pp. 53–61, 1994.
- K. van den Doel, D. Smilek, A. Bodnar et al., “Geometric shape detection with soundview,” in Proceedings of the 10th Meeting of the International Conference on Auditory Display (ICAD '04), vol. 47, no. 5, pp. 1–8, 2004.
- T. V. Evreinova, G. Evreinov, and R. Raisamo, “An evaluation of the virtual curvature with the StickGrip haptic device: a case study,” Universal Access in the Information Society, vol. 12, no. 2, pp. 161–173, 2013.
- J. J. Koenderink and A. J. van Doorn, “Surface shape and curvature scales,” Image and Vision Computing, vol. 10, no. 8, pp. 557–564, 1992.
- A. Pichler, R. B. Fisher, and M. Vincze, “Decomposition of range images using markov random fields,” in Proceedings of the 11th International Conference on Image Processing (ICIP '04), pp. 1205–1208, IEEE Computer Society Press, October 2004.
- G. Taylor and L. Kleeman, “Robust range data segmentation using geometric primitives for robotic applications,” in Proceedings of the 9th International Conference on Signal and Image Processing (IASTED '03), pp. 467–472, ACTA Press, August 2003.
- D. Weinshall, “Shortcuts in shape classification from two images,” CVGIP, vol. 56, no. 1, pp. 57–68, 1992.
- Metec AG, http://www.metec-ag.de/, 2013.
- E. Lecolinet and G. Mouret, “TACTIBALL, TACTIPEN, TACTITAB ou comment “toucher du doigt” les données de son ordinateur,” in Proceedings of the 17th Conférence Francophone sur l'Interaction Homme-Machine (IHM '05), pp. 227–230, ACM, New York, NY, USA, 2005.
- N. Noble and B. Martin, “Shape discovering using tactile guidance,” in Proceedings of EuroHaptics (EH '06), pp. 561–564, 2006.
- T. Pietrzak, A. Crossan, S. A. Brewster, B. Martin, and I. Pecci, “Exploring geometric shapes with touch,” in Proceedings of the 12th IFIP TC 13 International Conference on Human-Computer Interaction (INTERACT '09), pp. 145–148, Springer, Berlin, Germany, 2009.
- M. Ziat, O. Gapenne, J. Stewart, and C. Lenay, “Haptic recognition of shapes at different scales: a comparison of two methods of interaction,” Interacting with Computers, vol. 19, no. 1, pp. 121–132, 2007.
- J. S. Chan, T. Maucher, J. Schemmel, D. Kilroy, F. N. Newell, and K. Meier, “The virtual haptic display: a device for exploring 2-D virtual shapes in the tactile modality,” Behavior Research Methods, vol. 39, no. 4, pp. 802–810, 2007.
- M. Stamm, M. E. Altinsoy, and S. Merchel, “Identification accuracy and efficiency of haptic virtual objects using force-feedback,” in Proceedings of the 3rd International Workshop on Perceptual Quality of System (PQS '10), 2010.
- K. U. Kyung, H. Choi, D. S. Kwon, and S. W. Son, “Interactive mouse systems providing haptic feedback during the exploration in virtual environment,” in Proceedings of the 19th International Symposium (ISCIS '04), pp. 136–146, Springer, 2004.
- G. Jansson and K. Larsson, “Identification of haptic virtual objects with different degrees of complexity,” in Proceedings of Eurohaptics 2002 (EH '02), pp. 57–60, University of Edinburgh, Edinburgh, UK, 2002.
- S. Kelter, H. Grötzbach, R. Freiheit, B. Höhle, S. Wutzig, and E. Diesch, “Object identification: the mental representation of physical and conceptual attributes,” Memory and Cognition, vol. 12, no. 2, pp. 123–133, 1984.
- M. A. Symmons, B. L. Richardson, and D. B. Wuillemin, “Components of haptic information: skin rivals kinaesthesis,” Perception, vol. 37, no. 10, pp. 1596–1604, 2008.
- H. Bértolo, “Visual imagery without visual perception,” Psicológica, vol. 26, no. 1, pp. 173–188, 2005.
- G. Evreinov, T. V. Evreinova, and R. Raisamo, “Method, computer program and device for interacting with a computer,” Finland Patent Application, G06F ID, 20090434, 2009.