Towards fast and generalizable decision making with diffusion models
Abstract:
Many real-world decision-making problems are combinatorial in nature, where states can be seen as combinations of basic elements. Due to this combinatorial complexity, observing all combinations of basic elements in the training set is infeasible, which leads to an essential yet understudied problem: zero-shot generalization to states that are unseen combinations of previously seen elements. In this work, we first formalize this problem and then demonstrate how existing value-based reinforcement learning (RL) algorithms struggle due to unreliable value predictions in unseen states. We then show that behavior cloning with a conditioned diffusion model trained on expert trajectories generalizes better to states formed by new
combinations of seen elements than traditional RL methods. Although diffusion models achieve strong generalization in decision-making tasks, their slow inference speed remains a key limitation. While consistency models offer a potential solution, their applications to
decision-making often struggle with suboptimal demonstrations or rely on the complex concurrent training of multiple networks. To address this, we propose a novel approach to consistency distillation for offline reinforcement learning that directly incorporates reward optimization into the distillation process. Our method enables single-step generation while achieving higher performance with simpler training.
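To make the two ingredients above concrete, the sketch below gives a minimal PyTorch-style illustration of a state-conditioned diffusion policy trained by behavior cloning on expert data, and of a one-step student distilled from it with a reward-aware loss. This is not the thesis' implementation: the denoiser architecture (DenoiserMLP), the corruption schedule, the one-step student parameterization, and the softmax reward weighting are all hypothetical simplifications, and the distillation term shown is closer to plain reward-weighted one-step distillation than to a full consistency-distillation objective.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoiserMLP(nn.Module):
    # Predicts the noise added to an action, conditioned on state and diffusion time.
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, noisy_action, state, t):
        # t: (batch, 1) diffusion time in [0, 1].
        return self.net(torch.cat([noisy_action, state, t], dim=-1))

def diffusion_bc_loss(model, state, expert_action):
    # Epsilon-prediction behavior cloning on expert (state, action) pairs.
    t = torch.rand(state.shape[0], 1, device=state.device)
    noise = torch.randn_like(expert_action)
    noisy = (1.0 - t).sqrt() * expert_action + t.sqrt() * noise  # toy corruption schedule
    return F.mse_loss(model(noisy, state, t), noise)

def reward_weighted_distill_loss(student, state, teacher_action, reward):
    # Match a single-step student to actions sampled from the multi-step teacher,
    # upweighting high-reward transitions so that reward optimization enters the
    # distillation objective directly (a hypothetical weighting scheme).
    z = torch.randn_like(teacher_action)
    t_one = torch.ones(state.shape[0], 1, device=state.device)
    student_action = z - student(z, state, t_one)  # one network call = one-step generation
    weights = torch.softmax(reward.flatten(), dim=0)
    per_sample = ((student_action - teacher_action) ** 2).sum(dim=-1)
    return (weights * per_sample).sum()

In practice, teacher_action would be sampled offline with the multi-step diffusion policy, and a true consistency-distillation objective enforces self-consistency across adjacent points on the teacher's denoising trajectory rather than only matching its final samples; the sketch is only meant to show where a reward term can enter the distillation loss.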
Together, this thesis presents diffusion models as a fast and generalizable approach to decision-making tasks.
Committee:
Prof. Jeff Schneider (advisor)
Prof. Guanya Shi
Mihir Prabhudesai
