January 16, 2020    Michael Henninger

In the Human and Robot Partners (HARP) Lab at Carnegie Mellon University, a robot mounted to a table must choose between three candy dishes. The robot — a sleek, multi-jointed black arm — has a camera mounted to its two-fingered gripper. The machine analyzes the eye gaze of a graduate student seated across the table to determine which of the three types of candy she desires. It then reaches into the correct dish, picks up a piece and delivers it.

That successful human-robot interaction is the result of years of dedicated research and application by Henny Admoni, an assistant professor in the Robotics Institute at Carnegie Mellon and director of the HARP Lab. Admoni discussed how cognitive psychology, machine learning and robotics are creating the next generation of collaborative machines during a presentation called “The Future of Human-Robot Interaction” at the World Economic Forum’s annual meeting on January 23 in Davos, Switzerland.

“A lot of my work starts with understanding how humans behave,” Admoni said. “Using signals such as facial expression or eye gaze, we can have a robot understand what kind of assistance people need and actually take the right action at the right time. We work both in human psychology to understand human behavior, and on the robotic side by developing algorithms that are responsive to that human behavior.”

Henny Admoni talks about the future of human-robot interaction at Davos.

Admoni’s research focuses on assistive robots that help people with impairments live more fulfilling lives. She says that in the accessibility space, robots could provide caregiver-like support without experiencing fatigue.

While at the forum, Admoni also participated in the panel, “Caregiving in the New Economy,” and in the session, “The Story Behind the Photo: Robot Caregivers.”

“The domain of accessibility is a really motivating domain to work in, because we have the capacity to meaningfully change people’s lives,” Admoni said.

Eight Ph.D. students and two master’s students work in the HARP Lab, in addition to an assortment of visitors and collaborators. The lab focuses on three core projects. The first is physical robot manipulation, with researchers creating methods for robotic arms to assist in eating tasks. The second deals with AI systems explaining their decision-making process to users. The third focuses on food preparation and delivery.

“Carnegie Mellon is the best place in the world to do robotics. It’s the Disneyland of robotics,” Admoni said. “I can go down the hall and talk to machine learning experts or computer vision experts, and they’ve all thought about how their different fields contribute back to the robotics question. We have the critical mass to do some really groundbreaking research.”


Henny Admoni is the director of the Human and Robot Partners Lab.

Admoni came to Carnegie Mellon for her postdoctoral work in the Personal Robotics Lab under Siddhartha Srinivasa, who is now the Boeing Endowed Professor at the University of Washington. Srinivasa moved the lab to Washington state in 2017.

“Henny exemplifies what I strongly believe is the true purpose of robotics — building robots that can provide care for those in need,” Srinivasa said. “Her work is thoughtful in revealing the real issues that human-robot interaction faces, and has influenced the broader community to close the loop between humans and robots.”

On the world stage in Davos, Admoni discussed how robots are increasingly used in society, and how they need to reason about and adapt to unexpected situations. She insists that a robot’s design and function must account for all of the humans it may end up affecting.

“The World Economic Forum is made up of CEOs, heads of state, and people who think about technological advances at a very high level. They have the capacity to drive the way technology gets done,” Admoni said. “It’s a fantastic opportunity to help shape the way the world thinks about human-robot interaction.”


At the HARP Lab, a robot uses eye gaze to predict a person’s candy choice.