Enhancing Safety and Performance in Multi-agent Systems and Soft Robots via Reinforcement Learning

Master's Thesis, Tech. Report CMU-RI-TR-25-70, August 2025

Abstract

With the growing deployment of robots across various safety-critical applications, ensuring safety has become increasingly essential to prevent accidents and enhance reliability. Addressing safety in robotic systems presents several inherent challenges.

First, in multi-agent settings, incorporating safety filters can degrade overall task performance, as agents must simultaneously ensure collision avoidance and timely task completion. To mitigate this trade-off, we propose a co-safe Reinforcement Learning (RL) framework in which the ego agent uses neighbouring agents' safety-model information to better inform its policy, thereby minimally perturbing other agents' trajectories and improving task performance on the defined metrics.
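As a rough illustration of the idea (the thesis's concrete architecture is not reproduced here), the ego agent's observation can be augmented with outputs from the neighbours' safety models before it is fed to the RL policy. The `safety_margin` function below is a hypothetical stand-in for such a model, returning a signed distance to collision:

```python
import math

def safety_margin(pos, neighbour_pos, radius=1.0):
    """Hypothetical neighbour safety-model output: signed distance to collision."""
    return math.dist(pos, neighbour_pos) - radius

def ego_observation(ego_pos, ego_vel, neighbour_positions):
    """Augment the ego state with each neighbour's safety margin so the
    policy can anticipate when a neighbour's safety filter will engage."""
    margins = [safety_margin(ego_pos, p) for p in neighbour_positions]
    return list(ego_pos) + list(ego_vel) + margins

# Two neighbours at distances 2 and 3 yield margins 1.0 and 2.0.
obs = ego_observation((0.0, 0.0), (1.0, 1.0), [(2.0, 0.0), (0.0, 3.0)])
print(obs)  # [0.0, 0.0, 1.0, 1.0, 1.0, 2.0]
```

A policy trained on observations like these can learn to slow down or yield before a neighbour's safety filter is forced to intervene, which is the mechanism by which the ego agent avoids perturbing others' trajectories.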

Second, ensuring safety in soft robots is particularly challenging because their inherently complex, highly nonlinear dynamics limit the effectiveness of traditional model-based safety filters. To address this, we introduce a model-free safety filter based on Q-learning, designed to integrate seamlessly with standard reinforcement learning frameworks. The practical viability and robustness of the proposed safety filter are demonstrated through simulation studies and real-world validation on a soft robotic limb actuated by shape memory alloys.
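The general shape of such a filter can be sketched as follows: a safety critic Q(s, a) is learned from data alone (no dynamics model), and at deployment it overrides the task policy's action whenever that action's estimated risk exceeds a threshold. The toy 1-D world, thresholds, and tabular critic below are illustrative assumptions, not the thesis's actual formulation:

```python
import random

# Toy 1-D world: positions 0..10, with 0 and 10 treated as unsafe (collision).
N = 10
ACTIONS = [-1, +1]
GAMMA = 0.9   # discount on future risk
ALPHA = 0.5   # TD learning rate

def step(s, a):
    s2 = max(0, min(N, s + a))
    return s2, s2 in (0, N)  # next state, unsafe flag

# Tabular safety critic: Q[s][a] estimates the discounted risk of eventually
# reaching an unsafe state after taking action a in state s, assuming the
# safest continuation thereafter (hence the min over next actions).
Q = {s: {a: 0.0 for a in ACTIONS} for s in range(N + 1)}

random.seed(0)
for _ in range(20000):
    s = random.randint(1, N - 1)
    a = random.choice(ACTIONS)
    s2, unsafe = step(s, a)
    target = 1.0 if unsafe else GAMMA * min(Q[s2].values())
    Q[s][a] += ALPHA * (target - Q[s][a])

THRESHOLD = 0.5

def filter_action(s, a_task):
    """Pass the task policy's action through unless the critic deems it unsafe."""
    if Q[s][a_task] <= THRESHOLD:
        return a_task
    return min(ACTIONS, key=lambda a: Q[s][a])  # safest fallback action

# Near the left boundary the filter overrides a leftward task action.
print(filter_action(1, -1))  # → 1
```

Because the critic is learned purely from interaction data, the filter needs no analytic model of the robot's dynamics, which is what makes this style of approach attractive for soft robots whose dynamics are hard to model.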

BibTeX

@mastersthesis{Choudhary-2025-148168,
author = {Yogita Choudhary},
title = {Enhancing Safety and Performance in Multi-agent Systems and Soft Robots via Reinforcement Learning},
year = {2025},
month = {August},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-25-70},
keywords = {Robot Learning, Multi-agent Systems},
}