
PhD Thesis Defense

Nathan Brooks, Carnegie Mellon University
Tuesday, July 18
12:30 pm to 1:30 pm
NSH 1305
Situational Awareness and Mixed Initiative Markup for Human-Robot Team Plans

Abstract:
As robots become more reliable and user interfaces (UIs) become more powerful, human-robot teams are being applied to more real-world problems. Human-robot teams offer the redundancy and heterogeneous capabilities desirable in scientific investigation, surveillance, disaster response, and search and rescue operations. Large teams are overwhelming for a human operator, so systems employ high-level team plans to describe the operator’s supervisory roles and the team’s tasks and goals. In addition, UIs apply situational awareness (SA) techniques and mixed initiative (MI) invocation of services to manage the operator’s workload. However, current systems use static SA and MI settings, which cannot capture changes in the plan’s context or the overall system configuration. The configuration for one domain, device, environment, or section of a plan may not be appropriate for others, limiting performance.

This thesis addresses these issues by developing a team plan language for human-robot teams and augmenting it with a situational awareness and mixed initiative (SAMI) markup language. SAMI markup captures SA techniques for UI components, MI settings for decision making, and constraints for algorithm selection at specific points in a team plan. In addition, we identify properties of the team plan language and use them to develop semantic and syntactic software agents that aid plan development.

To test the team plan language and markup’s ability to capture complex behavior and context-specific needs, we design several experiments in simulation and deploy a large team of autonomous watercraft. Run-time statistics and the team’s ability to adapt to challenges “in the wild” are used to evaluate the effectiveness of the marked-up language.

To assess the learnability of the language by non-experts, we design a user study evaluating a series of self-guided lessons. Users with exposure to computer science concepts complete the training material, and task performance and interviews are used to assess the material’s effectiveness and scalability.

These contributions demonstrate an approach to improving the accessibility of human-robot teams and their performance in complex environments.


Thesis Committee Members:
Paul Scerri, Chair
Illah Nourbakhsh
Reid Simmons
Julie Adams, Oregon State University