Text-to-Decision Agent: Learning Generalist Policies from Natural Language Supervision

RL systems usually tackle generalization by inferring task beliefs from high-quality samples or warm-up explorations. This restricted form limits their generality and usability, since such supervision signals are expensive, and sometimes infeasible, to acquire in advance for unseen tasks. Learning directly from raw text describing decision tasks is a promising alternative that taps a much broader source of supervision. In this paper, we propose Text-to-Decision Agent (T2DA), a simple and scalable framework that supervises generalist policy learning with natural language. We first introduce a generalized world model to encode multi-task decision data into a dynamics-aware embedding space. Then, inspired by CLIP, we predict which textual description goes with which decision embedding, effectively bridging their semantic gap via contrastive language-decision pre-training and aligning the text embeddings to comprehend the environment dynamics. Finally, we train a text-conditioned generalist policy so that the agent can directly perform zero-shot text-to-decision generation in response to language instructions. Comprehensive experiments on the MuJoCo and Meta-World benchmarks show that T2DA achieves high-capacity zero-shot generalization and outperforms various types of baselines.
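As a rough illustration of the CLIP-inspired objective described above, the sketch below aligns text embeddings with decision embeddings using a symmetric contrastive (InfoNCE) loss, where matching text-decision pairs share a batch index. All names, shapes, and the temperature value are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_language_decision_loss(text_emb, decision_emb, temperature=0.07):
    """CLIP-style symmetric contrastive loss between text and decision embeddings.

    text_emb, decision_emb: (batch, dim) tensors where row i of each comes from
    the same task, so matching pairs lie on the diagonal of the similarity matrix.
    """
    # L2-normalize so dot products become cosine similarities.
    text_emb = F.normalize(text_emb, dim=-1)
    decision_emb = F.normalize(decision_emb, dim=-1)

    # Pairwise similarity matrix scaled by a temperature hyperparameter.
    logits = text_emb @ decision_emb.t() / temperature

    # Each text should be matched to the decision embedding with the same index.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Symmetric cross-entropy: text-to-decision and decision-to-text directions.
    loss_t2d = F.cross_entropy(logits, targets)
    loss_d2t = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_t2d + loss_d2t)
```

In practice, the text embeddings would come from a language encoder over task descriptions and the decision embeddings from the dynamics-aware world-model encoder; minimizing this loss pulls matched pairs together and pushes mismatched pairs apart.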
@article{zhang2025_2504.15046,
  title   = {Text-to-Decision Agent: Learning Generalist Policies from Natural Language Supervision},
  author  = {Shilin Zhang and Zican Hu and Wenhao Wu and Xinyi Xie and Jianxiang Tang and Chunlin Chen and Daoyi Dong and Yu Cheng and Zhenhong Sun and Zhi Wang},
  journal = {arXiv preprint arXiv:2504.15046},
  year    = {2025}
}