
VASC Seminar

Metin Sezgin, Associate Professor, Koç University, Istanbul; Visiting Fellow, Yale University
Wednesday, August 16
3:00 pm to 4:00 pm
Interaction through Subtle Communicative Cues

Event Location: NSH 1507
Bio: T. Metin Sezgin graduated summa cum laude with Honors from Syracuse University in 1999. He completed his MS in the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology in 2001 and received his PhD from MIT in 2006. He subsequently joined the Rainbow Group at the University of Cambridge Computer Laboratory as a Postdoctoral Research Associate. Dr. Sezgin is currently an Associate Professor in the College of Engineering at Koç University, Istanbul. His research interests include intelligent human-computer interfaces, multimodal sensor fusion, and HCI applications of machine learning; he is particularly interested in applying these technologies to building intelligent pen-based interfaces. His research has been supported by national and international grants, including grants from the European Research Council and Turk Telekom. He is a recipient of the Career Award of the Scientific and Technological Research Council of Turkey.

Abstract: Speech is the dominant modality in human-human communication, but it is supported in subtle ways by other communicative cues (e.g., gestures, eye-gaze, and haptics). These cues, although subtle, play a major role in enriching human-human interaction by conveying complementary information. In this talk, I will present case studies that demonstrate the wide range of information that can be extracted from subtle cues, and I will show examples of how human-computer interaction in general, and human-robot interaction in particular, can be enhanced through the strategic use of subtle communicative cues. The examples will come from robot-assisted joint manipulation tasks (e.g., carrying a table with the help of a robot), conversational robotic agents, and multimodal interaction using eye-gaze tracking and pen input.