Policy Learning with a Language Bottleneck

Modern AI systems such as self-driving cars and game-playing agents achieve superhuman performance, but often lack human-like generalization, interpretability, and interoperability with human users. Inspired by the rich interactions between language and decision-making in humans, we introduce Policy Learning with a Language Bottleneck (PLLB), a framework enabling AI agents to generate linguistic rules that capture the high-level strategies underlying rewarding behaviors. PLLB alternates between a *rule generation* step guided by language models and an *update* step where agents learn new policies guided by rules, even when a rule is insufficient to describe an entire complex policy. Across five diverse tasks, including a two-player signaling game, maze navigation, image reconstruction, and robot grasp planning, we show that PLLB agents not only learn more interpretable and generalizable behaviors, but can also share the learned rules with human users, enabling more effective human-AI coordination. We provide source code for our experiments at this https URL.
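The alternation the abstract describes can be summarized as a simple loop. The sketch below is a minimal, illustrative rendering only, assuming hypothetical callables (collect_episodes, generate_rule, update_policy) that stand in for the paper's actual components:

```python
from typing import Any, Callable

# Minimal sketch of the PLLB alternation, under stated assumptions:
# all three callables are hypothetical placeholders, not the authors' API.
def pllb_loop(
    collect_episodes: Callable[[Any], list],          # roll out the current policy
    generate_rule: Callable[[list], str],             # LM summarizes rewarding behavior as a rule
    update_policy: Callable[[Any, list, str], Any],   # learn a new policy guided by the rule
    policy: Any,
    num_iterations: int = 10,
) -> tuple[Any, str]:
    rule = ""
    for _ in range(num_iterations):
        # Gather experience with the current policy.
        episodes = collect_episodes(policy)
        # Rule generation step: a language model distills the high-level
        # strategy behind the most rewarding episodes into a linguistic rule.
        rule = generate_rule(episodes)
        # Update step: the agent learns a new policy guided by the rule,
        # even when the rule alone cannot fully specify the policy.
        policy = update_policy(policy, episodes, rule)
    return policy, rule
```

Because the rule is plain language, the same loop output can be handed to a human user, which is what enables the human-AI coordination results the abstract mentions.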
@article{srivastava2025_2405.04118,
  title={Policy Learning with a Language Bottleneck},
  author={Megha Srivastava and Cedric Colas and Dorsa Sadigh and Jacob Andreas},
  journal={arXiv preprint arXiv:2405.04118},
  year={2025}
}