Learning in image-guided reaching changes the representation-to-action mapping

Bing Wu, Roberta Klatzky, Damion Michael Shelton, and George D. Stetten
Vision Sciences Society Meeting, May, 2007.


Download
  • Adobe portable document format (pdf) (4MB)

Abstract
In training trials, subjects reached with a needle to a hidden target in near space using ultrasound guidance. Systematic errors were initially introduced by displacing the ultrasound sensor in depth relative to the reaching point. Subjects learned to accommodate to the displacement with practice. In theory, the improvement could be accomplished by (i) adjustment of the target-location representation, (ii) learning of specific motor responses, or (iii) establishment of a new representation-to-motor mapping. Experiment 1 assessed the first hypothesis. Using a visual matching paradigm, subjects' judgments of target location were measured before and after training. Although reaching accuracy improved with training, there was no corresponding change in visual matching. Moreover, training did not generalize fully to a new reaching point, as would be expected if learning corresponded entirely to refinement of the target-location representation. The results thus gave little support to the first hypothesis. In Experiment 2, using a task of aligning a hand-held stylus with a visible line, subjects' motor responses were measured before and after training. Again, needle guidance improved with training, but there was no corresponding change in hand alignment, contradicting the motor-learning hypothesis. Experiment 3 assessed whether subjects learned a general mapping from representation to action. After training as before, the ultrasound sensor was moved up to the same plane as the reaching entry point, causing a shift in the target position on the image. Generalized remapping predicts that the previously learned motor compensation should be imposed on the new location representation, producing a corresponding error. As predicted, the initial error in subjects' insertion response was very similar to the correction learned during training.
The remapping that resulted from cognitive correction is similar to the perceptual effects of prism goggles, where feedback from reaching to one spatial location generalizes to other locations with a corresponding magnitude of distortion.

Notes
Associated Center(s) / Consortia: Vision and Autonomous Systems Center and Quality of Life Technology Center
Associated Lab(s) / Group(s): Human-Robot Interaction Group
Associated Project(s): Sonic Flashlight™
Number of pages: 1

Text Reference
Bing Wu, Roberta Klatzky, Damion Michael Shelton, and George D. Stetten, "Learning in image-guided reaching changes the representation-to-action mapping," Vision Sciences Society Meeting, May, 2007.

BibTeX Reference
@inproceedings{Wu_2007_5889,
   author = "Bing Wu and Roberta Klatzky and Damion Michael Shelton and George D. Stetten",
   title = "Learning in image-guided reaching changes the representation-to-action mapping",
   booktitle = "Vision Sciences Society Meeting",
   month = "May",
   year = "2007",
}