What Can We Learn from a Million Models?
Abstract: Machine learning has transformed many fields by learning from large collections of data. Yet, it is rarely applied to its own outputs: the models themselves. Today, with millions of publicly available models, a natural question arises: what can we do with so many models? In this talk, I will motivate two core applications that [...]
Erica Weng – PhD Defense Info TBA
More info coming soon
Design Optimization of Modular Manipulators for Manipulation in Cluttered Agricultural Environments
Abstract: Although agriculture is a highly mechanized industry, essential and high-value subsectors such as horticulture and floriculture remain heavily reliant on manual labor because they require complex, contact-rich, and highly selective handling of both plants and produce. The variability and density of tree-canopy clutter further complicate the automation process, making robot performance difficult to quantify [...]
Modeling What Matters: Emergent Abstraction in Reinforcement Learning
Abstract: Real-world decision-making is rife with partial observability, long horizons, and complex multi-agent interactions. This thesis argues that abstraction—forming simplified representations of the task that retain relevant information—offers a unifying principle for tackling these challenges across model-free and model-based reinforcement learning (RL). We develop methods in which abstractions are not hand-designed but emerge from learning objectives, yielding representations that [...]
Should We Skip Attention?
Abstract: Transformers are ubiquitous, influencing nearly every aspect of modern AI. However, the mechanics of their training remain poorly understood. This poses a problem for the field, given the immense amounts of data, computational power, and energy invested in training these networks. I highlight an intriguing recent empirical result from [...]
Angela Chen – PhD Proposal Info TBA
More info coming soon