Henny Admoni once thought she’d be a journalist. Instead, she’s the latest addition to the Robotics Institute’s faculty — poised to change the way we interact with robots forever.
Henny Admoni thought she’d be a journalist when she started college. Lucky for the future of robotics, a key required class was full that first semester, so she took computer programming instead. Flash forward a decade, and she’s the latest addition to the faculty in Carnegie Mellon University’s Robotics Institute — poised to change the way we interact with robots forever.
Admoni might have entered Wesleyan University with the intent to major in journalism, but science and research ran in her blood. The daughter of a medical doctor, Admoni relocated from her native Israel to the U.S. when she was just four years old, and the family eventually settled in Long Island. There, she attended a high school with a science research course that taught the basics — investigating a problem, keeping a research notebook, designing posters. The summer before senior year, students usually worked in a university’s research lab.
For Admoni, that internship was with the NYU Psychology Department, where she tried to understand how people perceive human faces. The internship turned into a science fair submission that took her all the way to an international competition — which makes her transition from budding journalist to computer scientist seem more natural than shocking.
“After journalism went away, I looked at psychology or neuroscience research, but I was frustrated because it’s hard to say really precise things about what you find,” Admoni said. But computer science was different, because you build the system from the ground up and know exactly what’s happening. “I thought, ‘This is cool. I can bring these fields together.’”
And she did.
Admoni defined her own major, computational cognitive science, and investigated the methods and tools used to model human cognition. After four years, she wanted more. So she stayed at Wesleyan for a fifth year and earned a master’s degree. Still not satisfied, she joined Yale’s Social Robotics Lab, where she spent six years trying to improve how people and robots collaborate.
“I looked at how people use nonverbal communication like eye gaze and gestures to direct other people’s attention or communicate information,” she said. “I tried to model that, to have robots that both recognized nonverbal behavior from people and generated nonverbal behavior so people could understand them better — all with the idea that these robots would work with you as teachers, educators or coaches.”
After she finished her Ph.D. at Yale, Admoni wanted to expand the work she’d been doing on human-robot interaction into the realm of manipulation — a completely different kind of interaction than her work on socially assistive robots. As a post-doc at CMU, she began studying physically assistive robots that actually moved items in the world. She worked mostly with the Assistive Dexterous Arm (ADA) that might one day help people with motor impairments eat.
When offered the opportunity to become a faculty member in the Robotics Institute, she jumped at the chance.
“Being at CMU helped me expand my repertoire from socially assistive robots to physically assistive robots,” she said. “My lab is all about that — looking at robots that help people in two focus areas: socially assistive robots used for tutoring or therapy assistants, and physically assistive robots that have arms or manipulation devices that physically move things in the world.”
What unites these two different aspects of robotics? The need to look at people.
“I think human behavior reveals a tremendous amount of information about what’s going on inside people’s heads. If we can read things like eye gaze and nonverbal gestures, we can predict what kind of help people need and when they need that help, and what a robot can do to make their lives better.”
Using sensors like cameras, gaze-trackers and even heart-rate monitors, Admoni’s HARP Lab (Human and Robot Partners) aims to create robots that respond to a human’s needs and commands. Or as Admoni puts it, “Like Rosie (from ‘The Jetsons’) but way less capable.” She envisions robots helping children with autism practice the skills they learn in therapy, such as reading facial expressions or making eye contact. Admoni also sees a future where robots help people with mobility issues perform their daily tasks — preparing meals, cleaning the house, navigating social situations — without having to rely on someone else.
“My dream is that all the caregivers who come into people’s houses and make meals and clean bathrooms could instead use that time to socially interact with their client while a robot chops vegetables or picks up the laundry,” Admoni said.
But why did she choose to form her lab at CMU, when she might have gone elsewhere?
“The ethos of CMU is all about building technology to make the world better,” Admoni said. “I think my assistive robotics fits incredibly well here. CMU has also historically been a place where human-robot interaction was founded. I’m excited to push that forward.”
Plus, Admoni admits that being a roboticist at CMU is a little like being a kid at a theme park.
“On my first day at CMU, I saw six robots that I recognized from papers I’d read,” she said. “There was Tank, the roboceptionist. Victor, the Scrabble-playing robot. I saw Herb. I felt like I was at Disney World! I recognized all of these characters that formed the foundation of the field. And they’re HERE. That was really exciting.”
“CMU is a pretty awesome place,” she added. “There are robots everywhere and there are smart people everywhere. I love being immersed in this environment.”