Learning in image-guided reaching changes the representation-to-action mapping

Bing Wu, Roberta Klatzky, Damion Michael Shelton, and George D. Stetten
Conference Paper, Proceedings of the Vision Sciences Society Meeting, May 2007

Abstract

In training trials, subjects reached with a needle to a hidden target in near space using ultrasound guidance. Systematic errors were initially introduced by displacing the ultrasound sensor in depth relative to the reaching point. Subjects learned to compensate for the displacement with practice. In theory, the improvement could be accomplished by (i) adjustment of the target-location representation, (ii) learning of specific motor responses, or (iii) establishment of a new representation-to-motor mapping. Experiment 1 assessed the first hypothesis. Using a visual matching paradigm, subjects' judgments of target location were measured before and after training. Although reaching accuracy improved with training, there was no corresponding change in visual matching. Moreover, training did not generalize fully to a new reaching point, as would be expected if learning corresponded entirely to refinement of the target-location representation. The results thus gave little support to this hypothesis. In Experiment 2, using a task of aligning a hand-held stylus with a visible line, subjects' motor responses were measured before and after training. Again, needle-guided reaching improved with training, but there was no corresponding change in hand alignment, which contradicts the motor-learning hypothesis. Experiment 3 assessed whether subjects learned a general mapping from representation to action. After training as before, the ultrasound sensor was moved up to the same plane as the reaching entry point, causing a shift in the target position on the image. Generalized remapping predicts that the previously learned motor compensation should be imposed on the new location representation, producing a corresponding error. As predicted, the initial error in subjects' insertion responses was very similar to the correction learned during training. The remapping that resulted from cognitive correction resembles the perceptual effects of prism goggles, where feedback from reaching to one spatial location generalizes to others with a corresponding distortion magnitude.

BibTeX

@conference{Wu-2007-9735,
author = {Bing Wu and Roberta Klatzky and Damion Michael Shelton and George D. Stetten},
title = {Learning in image-guided reaching changes the representation-to-action mapping},
booktitle = {Proceedings of Vision Sciences Society Meeting},
year = {2007},
month = {May},
}