Computational Intelligence and Neuroscience
Volume 2010 (2010), Article ID 520781, 12 pages
Learning Arm/Hand Coordination with an Altered Visual Input
1Center for Sensory-Motor Interaction (SMI), Department of Health Science and Technology (HST), Aalborg University (AAU), DK-9220 Aalborg, Denmark
2Faculty of Electrical Engineering, University of Belgrade, Belgrade 11120, Serbia
3Institute for Multidisciplinary Research, Belgrade 11030, Serbia
Received 1 February 2010; Revised 10 May 2010; Accepted 14 June 2010
Academic Editor: Fabio Babiloni
Copyright © 2010 Simona Denisia Iftime Nielsen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The focus of this study was to test a novel tool for analyzing motor coordination under an altered visual input. The altered visual input was created using special glasses that presented the view recorded by a video camera placed at one of several positions around the subject: frontal (F), lateral (L), or top (T). In ten subjects, we compared the arm-end (wrist) trajectories during an object-grasping task under altered vision (F, L, and T conditions) with those under normal vision (N). The outcome measures were the trajectory errors, the movement parameters, and the time of execution. At the baseline of the study, we found substantial trajectory errors and an increased execution time. After three days of practicing for only 20 minutes per day with altered vision in the F condition, trajectory errors decreased in all conditions, suggesting that recalibration of the visual system occurred relatively quickly. These results indicate that this recalibration occurs through movement training under altered conditions. The results also suggest that recalibration is more difficult to achieve for altered vision in the F and L conditions than in the T condition. This study has direct implications for the design of new rehabilitation systems.
Visual information plays an important role in both planning and executing goal-directed movements. When planning the reaching aspect of the “reach to grasp” movement, vision provides information about the object’s properties (shape, size, and position in space), as described in detail many years ago by Jeannerod . During the execution of the action, the proprioceptive system (muscle spindles, Golgi tendon organs, and joint receptors) sends information to the central nervous system, which is then used to estimate the accuracy of the execution. In parallel, vision provides feedback that allows corrections if they are required . Performance depends on the level of mastery in executing the movement, which follows learning.
The role of vision during reaching to grasp has been studied in detail by preventing the subject from viewing either the hand alone or both the object and the hand during movement (often referred to as visual open loop; e.g., [3–5]). Previous studies agree that preventing vision during the reaching movement affects movement parameters (i.e., hand-target distance at the initiation of aperture closure, grip aperture amplitude, wrist velocity, and acceleration) and the relationships between those parameters. Movement time tends to increase when visual feedback is impaired, mostly due to a longer deceleration phase of the movement caused by a slower approach to the object [5–9]. This increase in movement time was found when visual feedback was blocked during the entire movement or only during its initial part [5, 9–11], when vision of the hand was blocked [5–7, 12, 13], and when monocular vision was used [8, 14, 15].
The brain can adapt to a variety of distortions of visual feedback when reaching for targets, including rotations and lateral shifts, by adjusting hand movements [16, 17]. The adjusted hand movement can be retained in all subjects after 24 hours  or even a year later . A novel dynamic environment learned for a single movement can be generalized to movements with the same orientation but an increased rate or amplitude .
A cerebrovascular accident (i.e., stroke) often results in paralysis (a decreased or complete loss of the ability to grasp and manipulate objects), but it also leads to a modified association between the proprioceptive and visual information reaching the brain and prevents the brain from sending the necessary command signals to the periphery [20, 21]. Therefore, stroke patients need to relearn how to integrate the preserved mechanisms into a functional reach to grasp movement. This was the motivation to study the learning of a new motor coordination skill when the visual and proprioceptive systems are dissociated.
This paper presents the analysis of how one learns to make hand movements in a new visuoperceptual association generated by a simple tool for altering visual input. The alteration of visual input was achieved with commercially available computer goggles (Myvu Crystal EV, http://www.myvu.com/) developed for the iPod. The goggles integrate two miniature video monitors into the left and right eye covers. We connected the goggles to the video output of a high-resolution digital camera. Thus, the visual input to the subjects was the image seen by the digital camera.
We analyzed the learning outcome when vision was altered by presenting the scene recorded by the camera placed at three locations around the working space. The task analyzed was to “reach and grasp a small object”; the analysis of movement errors relates only to the reaching part of the task. The execution of the task was grossly divided into successful (object grasped) and unsuccessful (object missed). We analyzed the performance on day one and on day five, allowing subjects to practice with the goggles for three consecutive days in between. This research follows studies of the so-called perceptual recalibration that takes place when a subject is exposed to altered visual input. It was suggested that when a discrepancy is introduced between the “seen” and “felt” location of an object , performance suffers. However, the sensory systems rapidly adapt to this discrepancy, returning perception and performance to near normal. Interestingly, subsequent removal of the discrepancy leads to a decrease in performance, known as the Negative Aftereffect . One suggestion is that this adaptation consists of “recalibrating” the transformation between the visual and proprioceptive perception of spatial location , because visuomotor adaptation is a perceptual recalibration that depends on the subject’s familiarity with the trajectory .
Ten healthy volunteers (mean age: years; range 25–35 years) with no history of neuromuscular or visual disorders participated in the experiment. All subjects signed an informed consent form prior to the experimental sessions. The investigation complied with the Declaration of Helsinki and was approved by the local ethics committee.
The subjects sat comfortably in a chair in front of a standard large desk covered with a black cloth (Figure 1). The trunk was fixed by a belt to the back support of the chair to minimize the motion of the shoulder during the reach to grasp tasks. The height of the chair was adjusted to allow for motion of the hand just above the table surface. The experiments were performed with the right (dominant) hand only. At rest (initial position of the hand), the elbow was flexed at about and the shoulder at about . Three colored circles with an 8 cm diameter were fixed to the cloth within the workspace; these represented the initial hand position (green—1), contralateral target (red—2), and ipsilateral target (blue—3). Distances between the circles were adjusted individually for each subject, so that subjects could comfortably reach them without fully extending their elbow. The range of distances was 35 to 50 cm.
The subjects’ altered vision was created by positioning the camera in front (F) of the subject, providing a mirror-like view; laterally (L), viewing from the right side; or on top (T), recording from a position above the table. The camera projected the image from these viewpoints to the goggles (Figure 1, insert). The experimental procedure was the following. The reaching (manipulation) task that we studied comprised the following sequence of four activities: (1) move the hand from the initial position (1—Figure 1) to the small object placed at the contralateral target (2—Figure 1); (2) grasp a small cylinder (D = 2 cm, H = 1 cm) placed on the contralateral target and move it to the ipsilateral target (3—Figure 1); (3) return the object to the contralateral target (2) and release it; and (4) return the hand to the initial position. The subjects were instructed to go through all four sequences even if they failed to grasp the object in sequence 2; this “fail to grasp” case was treated as unsuccessful in the later analysis. Subjects were also instructed to stop between sequences for about 2 seconds to allow clear separation of the sequences in the later analysis.
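The ~2-second stops between sequences make it possible to segment a recorded trial into its movement sequences automatically. A minimal sketch in Python; the speed threshold and minimum pause length are illustrative assumptions, not values from the study:

```python
import numpy as np

def split_sequences(speed, fs=50.0, move_thresh=0.02, min_pause_s=1.0):
    """Split a wrist-speed profile into movement sequences separated by pauses.

    Subjects paused ~2 s between sequences, so any below-threshold gap of
    at least min_pause_s seconds is treated as a real pause; shorter dips
    (e.g., near peak deceleration) are merged into the surrounding
    movement. Returns a list of [start, stop) sample-index pairs.
    """
    moving = np.asarray(speed) >= move_thresh
    # Pad with False so every moving run has explicit start and stop edges.
    padded = np.concatenate(([False], moving, [False]))
    edges = np.flatnonzero(np.diff(padded.astype(int)))
    runs = [[s, e] for s, e in zip(edges[::2], edges[1::2])]
    min_pause = int(min_pause_s * fs)
    merged = [runs[0]] if runs else []
    for s, e in runs[1:]:
        if s - merged[-1][1] < min_pause:
            merged[-1][1] = e          # gap too short to be a real pause
        else:
            merged.append([s, e])
    return [(int(s), int(e)) for s, e in merged]
```

A real pipeline would additionally discard incomplete repetitions and match each detected sequence to its target circle.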
The analysis followed the protocol depicted in Figure 2.
The recordings on Day 1 served as the baseline assessment. The session comprised three 30-second trials under each of the altered-vision conditions (F, L, and T) and normal vision (N). In each trial, the subject was asked to repeat the task as many times as possible; in most cases, the subject accomplished the task three times. This provided on average nine data sets for each condition.
Days 2, 3, and 4 (Figure 2) were allocated for training, which consisted of performing the task for 20 minutes under the F condition of altered vision. The decision to allow subjects to practice under only one condition was made to allow analysis of the effects of practice on the performance of the movement. In this way, the performance of movements that were practiced (F condition) could be compared with the performance of those that were not (L and T conditions).
The final evaluation was on Day 5 with the same protocol as the one described for Day 1.
2.3. Data Acquisition
The kinematics of the arm-end point during the reach and grasp activities were recorded in the Human Performance Lab at the Center for Sensory-Motor Interaction, Aalborg University, using a motion capture system (ProReflex MCU240, Qualisys, SE) with six cameras mounted on tripods and positioned around the workspace. Two markers were placed on the lateral and medial aspects of the wrist. The marker positions were acquired at 50 Hz using Qualisys Track Manager (Qualisys, SE) and then exported to Matlab. The 3D trajectory of the wrist joint was calculated as the mean of the two recorded marker trajectories and was then projected onto the plane coincident with the surface of the table to obtain the resulting 2D trajectory of the movement. The calculated signal was filtered using a second-order dual-pass Butterworth filter at a cutoff frequency of 10 Hz based on previously published literature [25, 26].
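The marker-processing steps above (averaging the two wrist markers, projecting onto the table plane, and dual-pass filtering) can be sketched in Python with SciPy. The assumption that the table coincides with the x-y plane of the motion-capture frame is ours:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 50.0      # marker sampling rate (Hz)
CUTOFF = 10.0  # Butterworth cutoff frequency (Hz)

def wrist_trajectory_2d(lateral_xyz, medial_xyz):
    """Estimate the 2D wrist path on the table plane from two wrist markers.

    lateral_xyz, medial_xyz: (n, 3) arrays of 3D marker positions.
    Following the paper: average the two markers, project onto the table
    plane (here assumed to be x-y, i.e., drop z), and smooth with a
    second-order Butterworth filter at 10 Hz applied forward and backward
    (filtfilt), which gives the zero-lag "dual-pass" filtering.
    """
    wrist = (np.asarray(lateral_xyz) + np.asarray(medial_xyz)) / 2.0
    xy = wrist[:, :2]                      # projection onto the table plane
    b, a = butter(2, CUTOFF / (FS / 2.0))  # cutoff normalized to Nyquist
    return filtfilt(b, a, xy, axis=0)
```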
2.4. Data Analysis
We distinguished between successfully performed tasks and unsuccessful tasks in which the subject failed to grasp the object. The unsuccessful case was termed a “no pick-up” error, and the measure was the number of “no pick-up” errors. The evaluation of the successful trials comprised analysis of the following.
(1) End-Point Errors (EEs). An end-point error (EE) was defined as the distance between the reference point (the center of the circle in the workspace) and the actual end point of the trajectory for sequences 1, 2, and 3. We distinguished between the contralateral 1 EE (end of sequence 1), the ipsilateral EE (end of sequence 2), and the contralateral 2 EE (end of sequence 3).
(2) Sequence Parameters. Peak velocity (PV), acceleration phase duration (AD), and deceleration phase duration (DD) were computed for each of the four sequences. PV was defined as the highest point on the velocity profile. AD was defined as the time from the onset of the sequence movement to the time of peak velocity, and DD as the time from the peak velocity to the end of the sequence movement. The onset and end of a sequence movement were defined as the times when the velocity rose above or fell below 5% of the peak velocity, respectively.
(3) Time of Execution (TE). TE was defined as the total duration of the complex four-sequence movement.
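The measures defined above can be computed directly from the wrist-speed profile of a single sequence; a minimal sketch in Python (the function and variable names are ours):

```python
import numpy as np

def sequence_parameters(speed, fs=50.0):
    """PV, AD, and DD for one movement sequence, per the definitions above.

    speed: 1D array of wrist speed sampled at fs. The onset (end) of the
    sequence movement is the first (last) sample where speed exceeds 5%
    of the peak velocity.
    """
    speed = np.asarray(speed, dtype=float)
    pv = speed.max()
    i_pv = int(np.argmax(speed))
    above = np.flatnonzero(speed > 0.05 * pv)
    onset, offset = int(above[0]), int(above[-1])
    return {
        "PV": pv,                    # peak velocity
        "AD": (i_pv - onset) / fs,   # acceleration phase duration (s)
        "DD": (offset - i_pv) / fs,  # deceleration phase duration (s)
    }

def end_point_error(end_xy, target_xy):
    """End-point error (EE): distance from trajectory end to circle center."""
    return float(np.linalg.norm(np.asarray(end_xy) - np.asarray(target_xy)))
```

TE is then simply the elapsed time from the onset of sequence 1 to the end of sequence 4.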
2.5. Statistical Analysis
A one-way repeated-measures analysis of variance (ANOVA) was used to assess differences in errors between the first and fifth days. Significant differences were determined by the Student-Newman-Keuls test for multiple comparisons. The outcomes were declared significant at .
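For completeness, the omnibus test can be written out from raw sums of squares. This is only a sketch of a one-way repeated-measures ANOVA; the Student-Newman-Keuls post hoc step and the software actually used for the analysis are not reproduced here:

```python
import numpy as np

def rm_anova_1way(data):
    """One-way repeated-measures ANOVA from raw sums of squares.

    data: (n_subjects, k_conditions) array with one row per subject
    (e.g., an error measure on Day 1 vs. Day 5). Returns
    (F, df_effect, df_error).
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()    # effect SS
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()    # subject SS
    ss_err = ((data - grand) ** 2).sum() - ss_cond - ss_subj  # residual SS
    df_c, df_e = k - 1, (n - 1) * (k - 1)
    return ss_cond / df_c / (ss_err / df_e), df_c, df_e
```

For two levels (e.g., Day 1 vs. Day 5), the resulting F equals the square of the paired t statistic, which is a convenient sanity check.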
Figure 3 presents representative trajectories of one subject for all conditions on Days 1 and 5 on the left and right plots, respectively.
Note that on Day 1, the trajectories were scattered within the workspace, especially for the F and L conditions. Furthermore, the end points of the individual sequences often fell outside of the reference circles. On Day 5, the trajectories were more consistent, and the end points accumulated within or in close proximity to the reference circles.
The latter is also demonstrated in Figure 4, which shows only the end points of the sequence trajectories on Days 1 and 5. On Day 5, the end-point clusters were less spread out, and their centers converged more towards the reference positions.
Figure 5 presents the overall end-point errors (EEs) for the four experimental conditions on both Day 1 and Day 5. The plots (from top to bottom) show the statistical data for the contralateral 1 EE, contralateral 2 EE, and ipsilateral EE, respectively. These results show significant differences between Day 1 and Day 5 (for F, L, and T) and between the N condition and the F, L, and T conditions (). The contralateral 1 EE was higher before the training sessions than after, and this difference was statistically significant for the F (, ), L (, ), and T (, ) conditions. In addition, the contralateral 1 EE on Day 1 was greater for the F, L, and T conditions than for the N condition, and this difference was statistically significant for all three conditions (, ); (, ); (, ). On Day 5, the errors for the F, L, and T conditions became comparable with those for the N condition.
We show only one result for the N condition because there was no difference between the recordings on Days 1 and 5; there was no Negative Aftereffect .
The contralateral 2 EE was higher before the training sessions than after, and this difference was statistically significant for the (, ) and (, ) conditions. In addition, the contralateral 2 EE for the F, L, and T conditions was greater on Day 1 than the EE for the N condition, and this difference was statistically significant for the and conditions (, ); (, ). On Day 5, the errors for the F, L, and T conditions became comparable with those for the N condition.
The ipsilateral EE was also higher before the training sessions than after, and this difference was statistically significant for the (, ) and (, ) conditions. In addition, the ipsilateral EE for the F, L, and T conditions was greater on Day 1 than the EE for the N condition, and this difference was statistically significant for the and conditions (, ); (, ). On Day 5, the errors for the F, L, and T conditions became similar to those for the N condition.
Table 1 summarizes the incidence of failed pick-up of the object for all conditions on Days 1 and 5. On Day 1, the no pick-up number was 47 out of 215 trials, whereas on Day 5 this occurred only 6 times out of 245 trials.
Figure 6 depicts the velocity profiles for one representative subject on Day 1 and on Day 5 under the four experimental conditions. Note that the velocities on Day 1 had unusual shapes (e.g., wavy and/or multimodal profiles). On Day 5, the velocities had near symmetrical bell-shaped profiles typical of normal reaching movements. Table 2 summarizes the movement parameters for the whole group under the four experimental conditions for Days 1 and 5. PV was higher for Day 5 than for Day 1 for all conditions and all movement sequences.
For all conditions with altered vision, there was an obvious decrease in AD and DD on Day 5 compared with Day 1 (Table 2); on Day 5, these values became comparable with those for the N condition. Likewise, for all conditions with altered vision, there was an evident decrease in TE on Day 5 compared with Day 1 (Table 2), and the TE values for the F, L, and T conditions became comparable with those for the N condition.
4.1. Poor Performance under Altered Vision
Figure 3 demonstrates that the altered visual input significantly affected the performance on Day 1. Subjects showed very poor performance for the F and L conditions, with scattered trajectories covering almost the entire workspace. The poor performance for the F and L conditions, observed qualitatively from the trajectory traces (Figure 3), is consistent with the high EE values for these conditions compared with condition N, as illustrated in Figure 5. The largest values for the contralateral 1 EE and ipsilateral EE were observed in the condition, whereas the contralateral 2 EE reached its maximum value in the condition. This suggests that altered vision in the condition had the greatest effect on sequences 1 and 2, whereas altered vision in the condition mostly affected sequence 3 of the movement.
These observations are consistent with the scattered end points shown in Figure 4. On Day 1, the end points of the trajectories for the F and L conditions were scattered over a large area of the workspace outside of the reference circles. In contrast, the end points for the T condition were clustered within or very close to the reference circles, which suggests a lower level of dissociation between visual input and proprioception compared with the F and L conditions. This observation was also confirmed by a questionnaire that subjects filled out after the experiment: the subjects ranked the condition as the most difficult, followed by the and conditions.
The high trajectory errors and end-point variability on Day 1 were accompanied by an increase in AD, DD, and TE and a decrease in PV for the F, L, and T conditions, as presented in Table 2. Note that the time of execution (TE) was shorter in the N condition (7.26 s) than in the F (9.77 s), L (9.17 s), and T (9.02 s) conditions. When visual input is altered, subjects often use a strategy of slowing down the movement to ensure an accurate reach and grasp . When the visual feedback of movement is presented on a screen, movement accuracy decreases and movement time increases .
Movement time tends to increase when visual feedback is reduced (e.g., when vision was occluded at four different latencies from the onset of the reach, as shown by Winges et al. ), mostly due to a longer deceleration phase of the movement caused by a slower approach to the object [5–9]. Indeed, the deceleration phase duration (DD) values for the group (Table 2) were longer in the (1.1, 1.3, 1.4, and 1.1 s) and (1.0, 1.1, 1.0, and 0.94 s) conditions than in the N (0.81, 0.85, 0.83, and 0.84 s) condition for all four sequences of the movement. An increase in the duration of the deceleration phase (DD) of the reach was also found when visual feedback was blocked during the entire movement or only during the initial part of the movement [5, 9–11], when vision of the hand was blocked [5–7, 12, 13], or when monocular vision was used [8, 14, 15]. Note also that the acceleration phase (AD) for the and conditions lasted longer than in the N condition, as presented in Figure 6 for a representative subject and in Table 2 for the whole group.
The presented data further extend the findings of Van Opstal and Van Gisbergen ; Sivak and MacKenzie ; Chieffi and Gentilucci ; and Berthier et al.  by showing that altered vision leads to a decrease in the peak velocity for all conditions and all sequences and that the bell-shaped velocity profile is absent in the and conditions, as illustrated for a representative subject in Figure 6. These changes in the nature of the velocity profile on Day 1 with respect to unaltered vision were accompanied by a greater total duration of each sequence (3.5 s on Day 1 compared with 1.5–2.0 s on Day 5). These observations are consistent with those for the whole group, as shown in Table 2.
4.2. Fast Learning
Subjects’ performance even improved across trials within Day 1, as shown in Figure 3. For the representative subject presented in Figure 3, the contralateral 1 EE decreased from 54 mm in trial 1 to 46 mm in trial 9, the contralateral 2 EE decreased from 86 mm to 67 mm, and the ipsilateral EE decreased from 96 mm to 81 mm ( condition). This suggests that fast learning (recalibration) occurred but that adaptation remained incomplete. The performance improved further on Day 5 due to trial-by-trial learning on Days 2, 3, and 4. The sensory systems rapidly adapted to the disrupted visual feedback, returning perception and performance to near normal. This finding might suggest that two correction mechanisms are involved in trajectory amendment: an initial mechanism that produces a quick but approximate reduction of the spatial error between the terminal hand position and the target position, and a complementary mechanism that leads to a progressive refinement and optimization of the trajectory through practice .
On Day 5, the EE and the number of no pick-up errors decreased for all altered-vision conditions, as presented in Figure 5 and Table 1. This decrease translates to an increased ability of the subject to control the hand trajectory during the reach to grasp task. The improved performance was found for all views, although the training was performed under only one altered-vision condition (F view). In all conditions, the values became comparable with those typical for the N condition. This suggests that the dissociation of proprioception and vision introduced by the goggles was minimized with short training and that recalibration occurred even for the views that were not practiced. This follows the results presented by Baraduc and Wolpert , who reported that the brain quickly adapts to a variety of distortions of visual feedback of the hand when reaching for targets, including rotations and lateral shifts in the field of view, by adjusting hand movements.
Note that on Day 5 the variability of the trajectory end points decreased (full circles in Figure 4). The trajectory end points were clustered within a narrower area; for sequence 1, almost all trajectory end points were inside the circle, whereas for sequence 2 some of the end points were still outside the circle. The improvement of terminal accuracy was associated with a change in kinematic parameters. The durations of the acceleration and deceleration periods and the time of execution decreased during the final trials. After practice, the time of execution decreased for the F (from 9.77 s on Day 1 to 8.24 s on Day 5) and L (from 9.17 s on Day 1 to 7.82 s on Day 5) conditions. Concomitant with the reduction in the time of execution, there was a progressive increase in the peak velocity. The velocity profiles for the F, L, and T conditions became comparable with those for the N condition (bell-shaped profile ), as shown in Figure 6.
A similar pattern (a decrease in the trajectory errors and an increase in peak velocity from Day 1 to Day 5) was obtained in the and conditions, in contrast to the condition. This result suggests that although general learning occurred, it was not at the same level for all views.
4.3. Visuomotor Skill Acquisition or Perceptual Recalibration?
Video-controlled reaching tasks represent a complex and original visuomotor situation because there is a discrepancy between the working and visual spaces, implying more elaborate processing of spatial information .
We analyzed subjects’ performance on the first and the fifth days of goggle use to assess their ability to learn reaching and grasping with an altered visual input. We tested how the CNS deals with imposed artificial visual feedback compared with normal visually guided reaching. On Day 1, the altered vision resulted in worse performance than normal vision. However, by Day 5, the sensory systems had adapted to the discrepancy, returning perception and performance to near normal.
One interpretation of these results is that this adaptation consists of “recalibrating” the transformation between the visual and proprioceptive perception of spatial location . Perceptual recalibration appears to involve a global topological realignment, in the sense that alterations within a trained region of space generalize to other untrained regions . This is supported by our results showing improved performance on Day 5 for the condition, although this condition was not used for training.
We do not assume that perceptual recalibration (a coordinative remapping between different perceptual representations such as vision and proprioception) and visual-motor skill acquisition (a task-dependent adjustment of the motor response to compensate for a manipulation of the working environment) are mutually exclusive [31–34]. On the contrary, we hypothesize that both occur; yet it is difficult to estimate the relative contribution of each of them over the course of the adaptation period.
When visuomotor discrepancies occur, feedback that is perceived to be coincident with the limb is registered as an internal error, leading to the induction of a perceptual recalibration. Feedback that is not perceived to be physically coincident with the limb is registered as an external error, leading to the reduction of error during exposure . It is also possible that the perception of the error as internal or external in origin might lead the subject to rely preferentially on either egocentric or allocentric cues for the guidance of movement . Studies have shown that there is a functional interaction between the two frames of reference and that this interaction can be affected by experimental conditions [36, 37].
Our results suggest that the difference between altered-vision tasks and normal visually guided reaching leads to an adaptation in the form of perceptual recalibration, where proprioception is calibrated in terms of the visual system. If the adaptation instead takes the guise of a more cognitive, problem-solving process, we can refer to it as visual-motor skill acquisition. Future studies are warranted to further explore this issue. The ability to predict with some confidence which of these two types of adaptation a given peripheral manipulation will induce would allow one to predict whether significant improvement is likely to occur with training, how persistent the adaptation will be, and whether it will result in aftereffects .
One of the envisioned applications of the results of this study is the rehabilitation of stroke patients, in whom a dissociation of proprioception and vision is caused by the impaired sensory-motor systems. The accepted approach for effective therapy suggests intensive repetitive exercise, possibly augmented with assistive systems such as functional electrical stimulation  or assistant robots . These therapies allow patients to train functional movements and to learn new strategies for the optimal use of the preserved sensory-motor mechanisms. This training can be understood as a process of recalibration of the natural control system. The results of this study show that in healthy individuals, this recalibration is fast and effective.
The other envisioned application relates to the inclusion of cognitive vision in the control loop of a transradial prosthesis [41, 42]. In this case, the camera is integrated into the artificial hand; the camera therefore moves with the hand and generates an altered visual input to which the controller needs to adapt.
In this paper, we presented an effective yet simple new tool for altering visual input when studying the motor coordination of reaching during the reach to grasp task. The results show that this alteration of visual input can be graded and, hence, allows for the study of different concepts of movement learning.
This study also partly addresses the Negative Aftereffect expected after perceptual recalibration due to altered visual input. The results suggest that the learning of a new skill and perceptual recalibration acted in different proportions during the adaptation period. However, we need to restate that the learning of the new task did not disrupt the previously acquired skills (normal condition), suggesting no Negative Aftereffect.
The Ministry of Science, Serbia (Project no. 175016) and HUMOUR Project (no. 231724) are acknowledged for their support.
- M. Jeannerod, “Intersegmental coordination during reaching at natural visual objects,” in Attention and Performance IX, J. Long and A. Baddeey, Eds., pp. 153–168, Erlbaum, Hillsdale, Mich, USA, 1981.
- M. Jeannerod, “Visuomotor channels: their integration in goal-directed prehension,” Human Movement Science, vol. 18, no. 2-3, pp. 201–218, 1999.
- M. Jeannerod, “The timing of natural prehension movements,” Journal of Motor Behavior, vol. 16, pp. 235–254, 1984.
- L. S. Jakobson and M. A. Goodale, “Factors affecting higher-order movement planning: a kinematic analysis of human prehension,” Experimental Brain Research, vol. 86, no. 1, pp. 199–208, 1991.
- L. F. Schettino, S. V. Adamovich, and H. Poizner, “Effects of object shape and visual feedback on hand configuration during grasping,” Experimental Brain Research, vol. 151, no. 2, pp. 158–166, 2003.
- N. E. Berthier, R. K. Clifton, V. Gullapalli, D. D. McCall, and D. J. Robin, “Visual information and object size in the control of reaching,” Journal of Motor Behavior, vol. 28, no. 3, pp. 187–197, 1996.
- J. D. Connolly and M. A. Goodale, “The role of visual feedback of hand position in the control of manual prehension,” Experimental Brain Research, vol. 125, no. 3, pp. 281–286, 1999.
- S. J. Watt and M. F. Bradshaw, “Binocular cues are important in controlling the grasp but not the reach in natural prehension movements,” Neuropsychologia, vol. 38, no. 11, pp. 1473–1481, 2000.
- S. A. Winges, D. J. Weber, and M. Santello, “The role of vision on hand preshaping during reach to grasp,” Experimental Brain Research, vol. 152, no. 4, pp. 489–498, 2003.
- S. R. Jackson, G. M. Jackson, and J. Rosicky, “Are non-relevant objects represented in working memory? The effect of non-target objects on reach and grasp kinematics,” Experimental Brain Research, vol. 102, no. 3, pp. 519–530, 1995.
- L. F. Schettino, S. V. Adamovich, W. Hening, E. Tunik, J. Sage, and H. Poizner, “Hand preshaping in Parkinson's disease: effects of visual feedback and medication state,” Experimental Brain Research, vol. 168, no. 1-2, pp. 186–202, 2006.
- M. Gentilucci, I. Toni, S. Chieffi, and G. Pavesi, “The role of proprioception in the control of prehension movements: a kinematic study in a peripherally deafferented patient and in normal subjects,” Experimental Brain Research, vol. 99, no. 3, pp. 483–500, 1994.
- A. Churchill, B. Hopkins, L. Rönnqvist, and S. Vogt, “Vision of the hand and environmental context in human prehension,” Experimental Brain Research, vol. 134, no. 1, pp. 81–89, 2000.
- P. Servos, M. A. Goodale, and L. S. Jakobson, “The role of binocular vision in prehension: a kinematic analysis,” Vision Research, vol. 32, no. 8, pp. 1513–1521, 1992.
- S. R. Jackson, C. A. Jones, R. Newport, and C. Pritchard, “A kinematic analysis of goal-directed prehension movements executed under binocular, monocular, and memory-guided viewing conditions,” Visual Cognition, vol. 4, no. 2, pp. 113–142, 1997.
- P. Baraduc and D. M. Wolpert, “Adaptation to a visuomotor shift depends on the starting posture,” Journal of Neurophysiology, vol. 88, no. 2, pp. 973–981, 2002.
- K. Yamamoto, D. S. Hoffman, and P. L. Strick, “Rapid and long-lasting plasticity of input-output mapping,” Journal of Neurophysiology, vol. 96, no. 5, pp. 2797–2801, 2006.
- J. W. Krakauer, C. Ghez, and M. F. Ghilardi, “Adaptation to visuomotor transformations: consolidation, interference, and forgetting,” Journal of Neuroscience, vol. 25, no. 2, pp. 473–478, 2005.
- S. J. Goodbody and D. M. Wolpert, “Temporal and amplitude generalization in motor learning,” Journal of Neurophysiology, vol. 79, no. 4, pp. 1825–1838, 1998.
- Y. Rossetti, G. Rode, and D. Boisson, “Implicit processing of somaesthetic information: a dissociation between where and how?” NeuroReport, vol. 6, no. 3, pp. 506–510, 1995.
- M. P. M. Kammers, I. J. M. van der Ham, and H. C. Dijkerman, “Dissociating body representations in healthy individuals: differential effects of a kinaesthetic illusion on perception and action,” Neuropsychologia, vol. 44, no. 12, pp. 2430–2436, 2006.
- R. J. Van Beers, A. C. Sittig, and J. J. Gon, “Integration of proprioceptive and visual position-information: an experimentally supported model,” Journal of Neurophysiology, vol. 81, no. 3, pp. 1355–1364, 1999.
- C. Kaernbach, L. Munka, and D. Cunningham, “Visuomotor adaptation: dependency on motion trajectory,” in Dynamic Perception, R. Würtz and M. Lappe, Eds., pp. 177–182, Infix, St. Augustin, Fla, USA, 2002.
- F. L. Bedford, “Keeping perception accurate,” Trends in Cognitive Sciences, vol. 3, no. 1, pp. 4–11, 1999.
- I. Pennel, Y. Coello, and J.-P. Orliaguet, “Frame of reference and adaptation to directional bias in a video-controlled reaching task,” Ergonomics, vol. 45, no. 15, pp. 1047–1077, 2002.
- R. Germain, F. Boy, J. P. Orliaguet, and Y. Coello, “Visual and motor constraints on trajectory planning in pointing movements,” Neuroscience Letters, vol. 372, no. 3, pp. 235–239, 2004.
- C. Ferrel, D. Leifflen, J.-P. Orliaguet, and Y. Coello, “Pointing movement visually controlled through a video display: adaptation to scale change,” Ergonomics, vol. 43, no. 4, pp. 461–473, 2000.
- A. J. Van Opstal and J. A. M. Van Gisbergen, “Skewness of saccadic velocity profiles: a unifying parameter for normal and slow saccades,” Vision Research, vol. 27, no. 5, pp. 731–745, 1987.
- B. Sivak and C. L. MacKenzie, “Integration of visual information and motor output in reaching and grasping: the contributions of peripheral and central vision,” Neuropsychologia, vol. 28, no. 10, pp. 1095–1116, 1990.
- S. Chieffi and M. Gentilucci, “Coordination between the transport and the grasp components during prehension movements,” Experimental Brain Research, vol. 94, no. 3, pp. 471–477, 1993.
- F. L. Bedford, “Perceptual and cognitive spatial learning,” Journal of Experimental Psychology, vol. 19, no. 3, pp. 517–530, 1993.
- J. R. Lackner and P. Dizio, “Rapid adaptation to Coriolis force perturbations of arm trajectory,” Journal of Neurophysiology, vol. 72, no. 1, pp. 299–313, 1994.
- T. A. Martin, J. G. Keating, H. P. Goodkin, A. J. Bastian, and W. T. Thach, “Throwing while looking through prisms: II. Specificity and storage of multiple gaze-throw calibrations,” Brain, vol. 119, no. 4, pp. 1199–1211, 1996.
- G. M. Redding and B. Wallace, “Adaptive spatial alignment and strategic perceptual-motor control,” Journal of Experimental Psychology, vol. 22, no. 2, pp. 379–394, 1996.
- D. M. Clower and D. Boussaoud, “Selective use of perceptual recalibration versus visuomotor skill acquisition,” Journal of Neurophysiology, vol. 84, no. 5, pp. 2703–2708, 2000.
- M. Gentilucci, S. Chieffi, E. Daprati, M. C. Saetti, and I. Toni, “Visual illusion and action,” Neuropsychologia, vol. 34, no. 5, pp. 369–376, 1996.
- M. Gentilucci, E. Daprati, M. Gangitano, and I. Toni, “Eye position tunes the contribution of allocentric and egocentric information to target localization in human goal-directed arm movements,” Neuroscience Letters, vol. 222, no. 2, pp. 123–126, 1997.
- R. B. Welch and A. C. Sampanes, “Adapting to virtual environments: visual-motor skill acquisition versus perceptual recalibration,” Displays, vol. 29, no. 2, pp. 152–158, 2008.
- M. B. Popović, D. B. Popović, T. Sinkjær, A. Stefanović, and L. Schwirtlich, “Clinical evaluation of functional electrical therapy in acute hemiplegic subjects,” Journal of Rehabilitation Research and Development, vol. 40, no. 5, pp. 443–453, 2003.
- V. S. Huang and J. W. Krakauer, “Robotic neurorehabilitation: a computational motor learning perspective,” Journal of NeuroEngineering and Rehabilitation, vol. 6, article no. 5, 2009.
- D. J. Klisić, M. Kostić, S. Došen, and D. B. Popović, “Control of prehension for the transradial prosthesis: natural-like image recognition system,” Journal of Automatic Control, vol. 19, no. 1, pp. 27–31, 2009.
- S. Došen and D. B. Popović, “Transradial prosthesis: artificial vision for control of prehension,” Artificial Organs. In press.