CMU Study Shows How Blind People Share Control With Robots in Real-World Scenarios
The Breakdown:
- A CMU study followed blind participants navigating a museum with a robotic guide.
- It revealed that navigation strategies shift from moment to moment in unpredictable environments.
- The findings highlight the need to design assistive technologies that adapt to users’ changing decisions in real time.
* * *
Assistive navigation robots are only as effective as the choices users make with them. A new study from Carnegie Mellon University’s School of Computer Science reveals that people who are blind or have low vision continually choose when to follow a robot’s guidance and when to act on their own — an insight crucial for designing more effective, user-centered assistive technologies in the future.
The study followed six blind participants over three weeks as they traversed Tokyo’s National Museum of Emerging Science and Innovation, known colloquially as the Miraikan. The participants encountered crowded spaces, blocked pathways and unexpected obstacles that required them to choose between acting on their own and relying on the robot.
“Our findings challenge a common assumption in robotics: that more automation is always better,” said Chieko Asakawa, chief executive director at the Miraikan and a courtesy faculty member in the Robotics Institute (RI). “In reality, effective systems should prioritize flexibility and user control. It’s easy to design a robot that does everything and assume that’s what people want. But many users want the ability to choose.”
To explore these choices, the research team built on an open-source robotic guide system, equipping a suitcase-style robot with cameras to detect obstacles and navigate indoor environments. Participants interacted with the robot through a handle with controls and buttons and received audio feedback through a wearable speaker.
The system combined informational and interactive elements to help users engage with their surroundings in multiple ways. The audio feedback feature, called “Surrounding GPT,” allowed participants to request a description of their surroundings by pressing a button on top of the handle. The audio output included details about visible objects, people and the spatial layout, along with an overall sense of what was happening in the environment. That goes well beyond earlier systems, which only delivered automatic alerts about obstacles, and supports the goal of providing broader situational awareness.
The robot also supported more direct forms of assistance. A button on the right enabled participants to delegate social interaction by having the robot say “Excuse me, please move,” while a button on the left prompted the robot to vocalize “Excuse me, please help me.” Together with Surrounding GPT, these functions gave participants multiple ways to manage social interactions, supporting both independent action and the ability to assign tasks to the robot.
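For readers who want a concrete picture of this interaction model, the sketch below maps each handle control to the behavior described above. It is a simplified illustration, not code from the study’s system: the button names, the `describe_surroundings` placeholder and the `speak` function are hypothetical stand-ins for the robot’s camera-based scene description and wearable-speaker output.

```python
from enum import Enum, auto

class HandleButton(Enum):
    """Hypothetical identifiers for the three controls described in the article."""
    TOP = auto()    # request a description of the surroundings ("Surrounding GPT")
    RIGHT = auto()  # ask nearby people to move
    LEFT = auto()   # ask nearby people for help

def describe_surroundings() -> str:
    """Placeholder for the scene-description step; the real system would
    generate this text from the robot's camera feed."""
    return "A wide corridor with two people standing near an exhibit on your left."

def speak(text: str) -> None:
    """Placeholder for audio output through the wearable speaker."""
    print(f"[speaker] {text}")

def handle_button_press(button: HandleButton) -> None:
    """Map each button press to the behavior the article describes."""
    if button is HandleButton.TOP:
        speak(describe_surroundings())
    elif button is HandleButton.RIGHT:
        speak("Excuse me, please move.")
    elif button is HandleButton.LEFT:
        speak("Excuse me, please help me.")

if __name__ == "__main__":
    # Simulate a participant pressing each control once.
    for press in (HandleButton.TOP, HandleButton.RIGHT, HandleButton.LEFT):
        handle_button_press(press)
```

Keeping every action behind an explicit button press, rather than triggering it automatically, reflects the study’s emphasis on letting users choose when to act themselves and when to delegate to the robot.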
Conducting a prolonged study in the real world allowed participants to experience the system over time. This approach helped avoid what researchers call the “novelty effect,” where first impressions — whether excitement or skepticism — can distort how people actually use a robot in daily life.

The results show there is no one-size-fits-all approach. Some participants preferred to take direct action by navigating around obstacles, investigating their surroundings or addressing situations themselves. Others chose to rely more on the robot, especially in crowded or noisy environments. In many cases, participants shifted between these approaches depending on the scenario.
“In a crowded space with no clear path forward, a person can try to handle the situation themselves or let the assistive robot step in,” said RI Ph.D. student Rayna Hata. “What we found is that decisions are rarely fixed. Choices vary not just from person to person, but from moment to moment — and when designing assistive technology, it’s critical to understand how and why users make decisions.”
Along with Hata and Asakawa, the research team included Masaki Kuribayashi and Allan Wang from the National Museum of Emerging Science and Innovation in Japan, and Hironobu Takagi from IBM Research in Tokyo.
The team’s work was accepted to the Association for Computing Machinery Conference on Human Factors in Computing Systems (CHI 2026), which takes place in April. Read the paper to learn more about the project.
For More Information: Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu
