Carnegie Mellon University Robotics Institute Research Guide

Human-Robot Interaction

In 2005, the Robotics Institute identified Human-Robot Interaction (HRI) as an “area of emerging and increasing interest,” and this has indeed been a growth area for the department. The prior report noted that HRI had primarily dealt with issues of human teleoperation and robotic control. These remain difficult challenges attracting attention from the faculty, but the RI has also engaged in projects that span an ever-expanding and more broadly defined area of HRI.

Researchers in HRI and Robotics are continuing, and extending, collaborations with colleagues in other parts of SCS, especially in the HCII. In addition, they are working with outside organizations, such as the ACM/IEEE Human-Robot Interaction Conference, to grow the field of HRI. As was projected in 2005, HRI will continue to gain importance in the coming years, owing especially to increased interest in robots for everyday life and assistive technologies. The Institute intends to continue supporting this thrust and to expand its activities in HRI, including growing its existing collaborations with the HCII.

A considerable amount of HRI work is occurring on interdisciplinary teams and is apparent in other sections (e.g., Graphics, Mobile Robots, Field Robotics, Medical Robotics, and the Quality of Life Technologies ERC); details of system-specific HRI efforts are deferred to those sections.

Controlling & Managing Robots

Within the domain of controlling robots, the Institute has emphasized ground, air, and other remote systems. Examples of single-robot systems include, but are not limited to, management and supervision of autonomous space rovers (Wettergreen) and virtual valet parking of personal vehicles (Steinfeld). Similar HRI challenges are also being explored in the DARPA Transformer TX project (Singh), where the vehicle is autonomous but not unmanned. The vehicle is expected to be flown by a common soldier, not one with special flight skills, thus creating the need for shared control between the operator and the vehicle.
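
To make the shared-control idea concrete, the following is a minimal sketch of confidence-weighted command blending; the function name, the blending rule, and the confidence signal are illustrative assumptions, not the Transformer TX implementation.

    # Minimal sketch of blended shared control: the autonomy and the operator
    # each propose a command, and a confidence-dependent weight decides how
    # much authority each side has this cycle. All names and the blending
    # rule are illustrative assumptions.

    def blend_commands(human_cmd: float, auto_cmd: float,
                       autonomy_confidence: float) -> float:
        """Linearly blend operator and autonomy commands.

        autonomy_confidence in [0, 1]: 0 = full manual, 1 = full autonomy.
        """
        alpha = max(0.0, min(1.0, autonomy_confidence))
        return alpha * auto_cmd + (1.0 - alpha) * human_cmd

    # Example: an unskilled operator commands an aggressive maneuver; the
    # autonomy, confident it is near an obstacle, damps the command toward
    # its safer plan.
    print(blend_commands(human_cmd=1.0, auto_cmd=0.2, autonomy_confidence=0.8))  # ~0.36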

Faculty members are also contributing to the challenging area of control of multiple robots, including coordinated remote construction (Simmons, Singh), control of Unmanned Aerial Vehicles and Unmanned Ground Vehicles by a single human (Sycara, Scerri), and wide-area supervisory control of unmanned boat and UAV systems (Dolan). HRI issues within large-scale coordination of multi-agent systems have continued to be a strong research effort within the Intelligent Software Agents Lab (Sycara, Scerri), and HRI is a key component of both a MURI (Multi-University Research Initiative) project and a Science of Autonomy project in that lab. A key challenge is scaling up control of robot teams, allowing a single operator to control tens or hundreds of robots (Scerri). This can be achieved by building in autonomous coordination, using adjustable autonomy, and abstracting the sensor data in useful ways.
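
As a rough illustration of one of these ideas, the toy sketch below directs a single operator's attention to whichever robot's abstracted status indicates it most needs help, while the rest coordinate autonomously; the data structure and scoring rule are assumptions for illustration, not the lab's actual system.

    # Operator attention allocation over a fleet: each robot reports an
    # abstracted status; low self-confidence on a critical task means a
    # high need for operator input. Names and scoring are invented.
    from dataclasses import dataclass

    @dataclass
    class RobotStatus:
        robot_id: str
        autonomy_confidence: float   # 0..1, robot's self-assessed competence
        task_criticality: float      # 0..1, importance of its current task

    def need_for_attention(s: RobotStatus) -> float:
        return (1.0 - s.autonomy_confidence) * s.task_criticality

    def select_robot_for_operator(fleet: list[RobotStatus]) -> RobotStatus:
        return max(fleet, key=need_for_attention)

    fleet = [
        RobotStatus("uav-03", autonomy_confidence=0.9, task_criticality=0.8),
        RobotStatus("ugv-11", autonomy_confidence=0.3, task_criticality=0.9),
        RobotStatus("uav-07", autonomy_confidence=0.5, task_criticality=0.2),
    ]
    print(select_robot_for_operator(fleet).robot_id)  # ugv-11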

In the manufacturing arena, faculty (Bourne) and students are actively researching systems where robots and humans can easily swap the initiative (i.e., the leadership role) in task execution. For many applications, humans should hold the task initiative because they possess extensive knowledge about the exogenous context: safety concerns, special precautions that need to be taken with delicate equipment, or whatever is not modeled explicitly by an assembly robot. For other tasks, however, the robot is in a superior position. For example, an assembly robot can accurately model its own motions and do collision checking against other modeled elements. The robot can easily map its positions into world coordinates, so that precise positions can be displayed for the benefit of a cooperative human agent, and in many cases the robot can simply “see” a developing situation better from its unique vantage point. Based on the give-and-take of the specific task elements, it is appropriate to allow the task initiative to change hands easily between the robot and human agents. To accomplish this, we are developing augmented reality tools to rapidly communicate directed information from the robot to the human, and active vision tools to communicate directed information from the human to the robot.
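
A toy sketch of such initiative arbitration follows; the task-element attributes and the decision rule are invented for illustration and do not reflect the actual system.

    # Whichever agent has the more reliable knowledge for the current task
    # element holds the initiative. Attributes and thresholds are invented.

    def choose_initiative(task_element: dict) -> str:
        """Return 'human' or 'robot' for the agent that should lead this step."""
        # Humans lead when unmodeled, exogenous context dominates (safety,
        # delicate equipment, anything outside the robot's world model).
        if task_element.get("safety_critical") or task_element.get("unmodeled_context"):
            return "human"
        # Robots lead when the step depends on precise, modeled geometry
        # (collision checking, world-coordinate positioning, vantage point).
        if task_element.get("requires_precise_positioning"):
            return "robot"
        return "human"  # default to human initiative when in doubt

    print(choose_initiative({"requires_precise_positioning": True}))  # robot
    print(choose_initiative({"safety_critical": True}))               # human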

Social & Task Interaction

Work on social interaction and robots as collaborators has grown from the Roboceptionist project into a number of new efforts. The Roboceptionist project (Simmons), begun in 2002, continues to be developed. The project, a joint effort with the CMU School of Drama, situates a robot with an expressive, graphical face in a heavily traveled corridor in our building. The robot responds to typed input, providing directions, weather reports, and information about its virtual life and family. Past work looked at how affect (emotion, mood) affected people's interactions with the robot. Recent work has looked at how people's willingness to be “polite” to the robot (saying hello, goodbye) correlates with their methods of interaction (persistence, thanking the robot, etc.).

Simmons and his team are also investigating techniques for making interactive robots culturally identifiable (Simmons, student: Maxim Makatchev). For instance, an Arabic Roboceptionist should be seen by Arabic (and, hopefully, non-Arabic) users as culturally Arabic. This involves differences in behaviors and gestures, posture, and dialogue (e.g., Arabic speakers tend to be more verbose and formal than American speakers). Understanding how to incorporate this into an artificial personality is quite challenging. The approach is to determine which properties are most informative, using both published studies on human-human interaction and controlled human-robot experiments, and to incorporate those results into a flexible agent framework that we are developing. Results are still fairly preliminary.

Acceptance also becomes important when moving around people. As robots become more ubiquitous, they will increasingly have to share space with humans (Simmons, students: Rachel Kirby & Frank Broz). Rather than requiring people to adapt to the robots' styles of navigation, we prefer to have the robots mimic human styles (e.g., passing on the right, waiting for everyone to leave an elevator before entering, waiting at a green light before turning left). We have developed two techniques for socially acceptable navigation. One uses a model where the task constraints (e.g., getting to a destination as quickly and safely as possible) are coupled with social constraints (e.g., respecting people's personal space), and the problem is cast as constrained optimization.
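
A minimal sketch of this constrained-optimization formulation appears below; the cost weights, the shape of the personal-space penalty, and the candidate-path representation are illustrative assumptions, not the published model.

    # Candidate paths are scored by a task cost (path length as a proxy for
    # "quickly and safely") plus a soft social cost penalizing intrusion into
    # personal space; the planner picks the minimum-cost candidate. Weights
    # and the penalty shape are invented for illustration.
    import math

    def personal_space_penalty(path, people, comfort_radius=1.2):
        """Sum of squared intrusions of path points into personal space."""
        penalty = 0.0
        for (px, py) in path:
            for (hx, hy) in people:
                d = math.hypot(px - hx, py - hy)
                if d < comfort_radius:
                    penalty += (comfort_radius - d) ** 2
        return penalty

    def path_cost(path, people, w_task=1.0, w_social=5.0):
        length = sum(math.hypot(x2 - x1, y2 - y1)
                     for (x1, y1), (x2, y2) in zip(path, path[1:]))
        return w_task * length + w_social * personal_space_penalty(path, people)

    candidates = [
        [(0, 0), (1, 0), (2, 0)],   # short, but passes close to a person
        [(0, 0), (1, 1), (2, 0)],   # longer detour that respects space
    ]
    people = [(1.0, 0.3)]
    print(min(candidates, key=lambda p: path_cost(p, people)))  # the detour wins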

Much of the non-verbal interaction between people is highly rhythmic in nature (Simmons, student: Marek Michalowski). These rhythms, such as nodding or gesturing in synchrony, help make interactions flow. The team has investigated how an extreme form of rhythmic interaction, dance, can be used to engage children. The results show that children are easily able to understand the intent of a robot as it dances (e.g., when the child should lead and when he or she should follow). The results have implications for both human-robot dialogue systems and autism research. Keepon, the robot used for this research, has made significant inroads into the world's perception of robot behavior: an early YouTube video of Keepon dancing using the rhythm software has drawn almost 2.5 million views to date.
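
For a flavor of how rhythmic entrainment can be implemented, here is a toy sketch in which the robot nudges an internal beat oscillator toward detected human beats (e.g., claps or head bobs); the update rule and gains are assumptions for illustration, not the actual Keepon rhythm software.

    # Phase-locked-loop style beat following: on each detected human beat,
    # correct the phase of the robot's next planned beat and drift the beat
    # period toward the human's inter-beat interval.

    class RhythmFollower:
        def __init__(self, period=0.5, phase_gain=0.3, period_gain=0.1):
            self.period = period      # seconds per beat
            self.next_beat = period   # time of the robot's next planned beat
            self.phase_gain = phase_gain
            self.period_gain = period_gain
            self.last_human_beat = None

        def on_human_beat(self, t: float):
            # Phase correction: pull the next planned beat toward the human's.
            self.next_beat += self.phase_gain * (t - self.next_beat)
            # Period correction: drift toward the observed inter-beat interval.
            if self.last_human_beat is not None:
                interval = t - self.last_human_beat
                self.period += self.period_gain * (interval - self.period)
            self.last_human_beat = t

    follower = RhythmFollower()
    for t in (0.55, 1.15, 1.74):      # human beats near a 0.6 s interval
        follower.on_human_beat(t)
    print(round(follower.period, 3))  # period drifting from 0.5 toward 0.6 s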

Rybski has been exploring robot platforms and technologies for human assistance (Rybski). The most recent manifestation is Snackbot, a project in collaboration with the HCII, in which a robot roams the halls serving snacks to building occupants. Snackbot provides a research platform for projects in robotics, design, and behavioral science. This work builds on the earlier CMAssist project (Rybski, Veloso) on understanding how robots can work around, interact with, and assist people in natural human environments. CMAssist, in turn, grew out of the CALO Physical Awareness project (Rybski, Veloso, De la Torre, Kanade), which was winding down in 2006.

A recent large NSF award to the team of Veloso, Simmons, Nourbakhsh, Steinfeld, and Rudnicky (LTI) will introduce multiple mobile robots into local buildings to serve as guides, delivery agents, and gophers. From an HRI perspective, this project will explore long-term interaction, physical proximity behaviors, and K-12 outreach. Steinfeld has a related project exploring how HRI factors, like reliability and situation awareness, affect human trust in robots, and has been advancing the HRI community's understanding of how to measure HRI (Steinfeld).

Human-Computer Interaction

A number of Institute faculty were key contributors to the SCS-wide Reflective Agents with Distributed Adaptive Reasoning (RADAR) project (Steinfeld, Smith, De la Torre, Veloso). This large, multi-year DARPA project focused on building a cognitive assistant embodying machine learning technology that is able to function “in the wild”: the technology need not be tuned by experts, and the person using the system need not be trained in any special way. The integrated system included components from Smith, De la Torre, and Veloso. A key aspect of RADAR was the interaction between the cognitive assistant and the user. This human-in-the-loop perspective was a central theme of the project, as researchers sought to increase performance by attacking the problem from a human-assistant system perspective. It manifested in annual formal system evaluations (Steinfeld). From late 2005 to 2008, almost 700 “clean” participants (no drop-outs, software crashes, etc.) were run through experiments designed to test the system. A total of 313 of these used an intelligent version of the software, resulting in over 600 cumulative hours of time on task with a cognitive assistant.

There is additional work aimed at technologies for mixed-initiative development and execution of task plans and schedules by humans and machines. Smith's research uses interactive constraint analysis and search techniques to infer consequences of human planner decisions, monitor and detect deficiencies in the plan, and generate options for extending, revising and improving the plan (Smith). Several systems based on these principles have been developed in specific mission planning and transportation logistics scheduling contexts.
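
As a small illustration of this style of constraint analysis, the sketch below propagates simple temporal constraints over a plan, flags deficiencies, and could feed an option generator; the plan representation and checks are illustrative assumptions, not Smith's systems.

    # Toy deficiency detection over a sequential plan: check precedence
    # (no overlap between consecutive activities) and deadlines. The
    # representation and checks are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Activity:
        name: str
        start: float
        duration: float
        deadline: float

    def detect_deficiencies(plan: list[Activity]) -> list[str]:
        problems = []
        for a, b in zip(plan, plan[1:]):
            if a.start + a.duration > b.start:      # precedence/overlap check
                problems.append(f"{a.name} overlaps {b.name}")
        for a in plan:
            if a.start + a.duration > a.deadline:   # deadline check
                problems.append(f"{a.name} misses its deadline")
        return problems

    plan = [
        Activity("load cargo", start=0.0, duration=3.0, deadline=4.0),
        Activity("depart",     start=2.0, duration=1.0, deadline=2.5),
    ]
    for issue in detect_deficiencies(plan):
        print(issue)  # "load cargo overlaps depart", "depart misses its deadline"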

Along similar lines, Mostow has continued to advance the state of the art in intelligent tutoring systems, especially for reading, within Project LISTEN (Mostow). A Braille tutor has been developed by TechBridgeWorld (Dias). Both tutoring systems have been deployed in various communities for use and evaluation. Since 2005, Project LISTEN has been evaluated outside the US, and new educational data mining methods and tools have been developed (e.g., in the ITS 2008 Best Paper).

In addition, Sycara has developed an intelligent agent assistant for military coalition planning and proactive information management, based on the agent's inference of the current and future information needs of the user. Sycara has also developed agents that help coalition planners become aware of, and resolve, differences in security and operation policies, so as to avoid policy violations and achieve high-quality plans despite the differing policies of the coalition partners.

Communities & Technology

A new area of research within the RI involves enabling everyday people to make an impact on their community and fostering citizen science through robotic technology. The CREATE Lab (Nourbakhsh) has produced and deployed a number of systems as a means for disruptively redefining how communities can make sense of their context through the use of robotic technologies. Projects include ChargeCar, the Center for Innovative Robotics, Fine Outreach for Science, GigaPan, Global Connection, Robot 250, and Robot Diaries. As with Keepon, these projects have had significant outreach impact. Outputs include new systems for hobbyists and scientists (GigaPan, TeRK), major visibility for the Institute (a GigaPan of the Obama inauguration, partnerships with National Geographic, the cover of Nature magazine, etc.), and large-scale community participation (Robot 250, 9,500+ miles of commuting data collected by ChargeCar, etc.).

Additional research in this domain has also recently begun at the Institute. Citizen science and participation through robotic technologies are a central part of the Rehabilitation Engineering Research Center on Accessible Public Transportation (Steinfeld; see separate Interdisciplinary section). TechBridgeWorld has explored new ways to introduce technologies into communities traditionally underserved by robotics and intelligent systems (Dias). Also, Paulos (HCII) recently joined the Institute and is focusing on the intersection of human life, our living planet, and technology (Paulos).

Haptics

Hollis has pioneered a new technology for haptic interaction with virtual and remote environments based on magnetic levitation. The technology has been used to perform numerous psychophysical investigations of haptic perception with Klatzky (Psychology), and the team has developed a haptic telerobotic system for explosive ordnance disposal for the U.S. Navy. A spin-off company, Butterfly Haptics, LLC, was created in December 2007 to commercialize the technology, with its first product announced at SIGGRAPH in August 2008.
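
For readers unfamiliar with haptic rendering, here is a generic textbook-style sketch of the kind of primitive such a device serves at kilohertz rates, a virtual wall rendered as a stiff spring; the stiffness and geometry are assumed values, and this is not Hollis's maglev controller.

    # Virtual wall via Hooke's law: penetration below the wall plane produces
    # a proportional restoring force. Stiffness and geometry are assumptions.

    def wall_force(z: float, wall_z: float = 0.0, stiffness: float = 2000.0) -> float:
        """Force (N) pushing the handle back out of a virtual wall at wall_z."""
        penetration = wall_z - z
        return stiffness * penetration if penetration > 0 else 0.0

    # Run once per servo cycle (haptic devices typically servo at ~1 kHz):
    print(wall_force(z=-0.002))  # 2 mm penetration -> 4.0 N restoring force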

Continue Reading: Machine Learning


Faculty

  1. David Bourne

  2. Fernando De la Torre

  3. Bernardine Dias

  4. Geoffrey Gordon

  5. Ralph Hollis

  6. Jack Mostow

  7. Illah Nourbakhsh

  8. Paul Rybski

  9. Paul Scerri

  10. Reid Simmons

  11. Steve Smith

  12. Sanjiv Singh

  13. Aaron Steinfeld

  14. Katia Sycara

  15. Manuela Veloso


Project Images

  • Snackbot

  • CMAssist

  • GigaPan: Obama's Inaugural Address by David Bergman

  • Measuring HRI

  • Reading Tutor

  • MRCS

  • RADAR

  • Roboceptionist

  • Multi-Cultural Interaction

  • Rhythmic Interaction

  • Magnetic levitation haptic device

  • RHI (Robot-Human Interaction): Robot Instructing Human


Video

  • Petting the Bunny from Butterfly Haptics

  • More Videos from Butterfly Haptics

  • Keepon dancing to Spoon's “I Turn My Camera On”

  • Snackbot Autonomous Navigation and Dialog