This talk describes a research program whose goals include developing techniques for non-invasive measurement of multimodal behavior in natural conditions (uncontrolled studies outside the laboratory) and for assessing time-varying coordination among signals and between signaling entities. The program includes studies of the production and perception of various types of audiovisual speech and of musical performance, in which simple optical-flow measurement techniques replace more cumbersome and invasive marker-based measurement. A new algorithm for computing time-varying correspondences between such signals is described and applied to examine the coordination between musicians and their audience, between visible gestures and audible speech, and between speech production and postural control. Finally, the talk discusses the need to reconcile the dualism inherent in computationally derived event structures and their likely symbolic identification.
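The abstract does not specify the correspondence algorithm itself. As a generic illustration of one way to assess time-varying coordination between two signals, a sliding-window Pearson correlation can be sketched; the function name and parameters below are illustrative assumptions, not the speaker's method:

```python
import numpy as np

def windowed_correlation(x, y, window, step=1):
    """Pearson correlation of x and y over sliding windows.

    Illustrative sketch: tracks how the coordination between two
    1-D signals changes over time. Not the algorithm from the talk.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = min(len(x), len(y))
    corrs = []
    for start in range(0, n - window + 1, step):
        xs = x[start:start + window]
        ys = y[start:start + window]
        # Guard against zero variance inside a window
        if xs.std() == 0 or ys.std() == 0:
            corrs.append(0.0)
        else:
            corrs.append(np.corrcoef(xs, ys)[0, 1])
    return np.array(corrs)

# Example: two signals whose coupling flips halfway through
t = np.linspace(0, 4 * np.pi, 400)
a = np.sin(t)
b = np.concatenate([np.sin(t[:200]), -np.sin(t[200:])])  # anti-phase later
r = windowed_correlation(a, b, window=50)
```

Early windows yield correlations near +1 and late windows near -1, making the moment the coordination changes visible in the correlation trace.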
The Robotics Institute is part of the School of Computer Science, Carnegie Mellon University.