UAD: Unsupervised Affordance Distillation for Generalization in Robotic Manipulation

10 June 2025
Yihe Tang, Wenlong Huang, Yingke Wang, Chengshu Li, Roy Yuan, Ruohan Zhang, Jiajun Wu, Li Fei-Fei
Main: 6 pages · 8 figures · 1 table · Bibliography: 4 pages · Appendix: 4 pages
Abstract

Understanding fine-grained object affordances is imperative for robots to manipulate objects in unstructured environments given open-ended task instructions. However, existing methods for visual affordance prediction often rely on manually annotated data or condition only on a predefined set of tasks. We introduce UAD (Unsupervised Affordance Distillation), a method for distilling affordance knowledge from foundation models into a task-conditioned affordance model without any manual annotations. By leveraging the complementary strengths of large vision models and vision-language models, UAD automatically annotates a large-scale dataset with detailed ⟨instruction, visual affordance⟩ pairs. Training only a lightweight task-conditioned decoder atop frozen features, UAD exhibits notable generalization to in-the-wild robotic scenes and to various human activities, despite being trained only on rendered objects in simulation. Using the affordances provided by UAD as the observation space, we show that an imitation learning policy demonstrates promising generalization to unseen object instances, object categories, and even variations in task instructions after training on as few as 10 demonstrations. Project website: this https URL
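
To make the architecture described in the abstract concrete, below is a minimal sketch of a lightweight task-conditioned decoder that maps frozen per-pixel visual features and an instruction embedding to an affordance heatmap. This is an illustrative assumption rather than the authors' implementation: the class and variable names (TaskConditionedAffordanceDecoder, feat_dim, text_dim), the feature dimensions, and the multiplicative fusion scheme are all hypothetical.

# Minimal sketch (not the paper's code) of a task-conditioned affordance decoder
# over frozen visual features. All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class TaskConditionedAffordanceDecoder(nn.Module):
    """Lightweight decoder: frozen per-pixel features + instruction embedding -> affordance map."""

    def __init__(self, feat_dim=1024, text_dim=512, hidden_dim=256):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, hidden_dim)   # project frozen visual features
        self.text_proj = nn.Linear(text_dim, hidden_dim)   # project instruction embedding
        self.head = nn.Sequential(                          # per-pixel affordance score
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, visual_feats, text_emb):
        # visual_feats: (B, H, W, feat_dim) frozen features from a large vision model
        # text_emb:     (B, text_dim) instruction embedding from a vision-language model
        f = self.feat_proj(visual_feats)                     # (B, H, W, hidden_dim)
        t = self.text_proj(text_emb)[:, None, None, :]       # (B, 1, 1, hidden_dim)
        fused = f * t                                        # simple multiplicative conditioning
        return torch.sigmoid(self.head(fused)).squeeze(-1)   # (B, H, W) affordance heatmap

# Example usage with random tensors standing in for extracted features.
decoder = TaskConditionedAffordanceDecoder()
feats = torch.randn(2, 32, 32, 1024)   # placeholder frozen visual features
instr = torch.randn(2, 512)            # placeholder instruction embedding
affordance = decoder(feats, instr)     # (2, 32, 32)

Since only this small decoder is trained while the feature extractors stay frozen, the approach keeps the trainable parameter count low, which is consistent with the abstract's claim of generalization from simulation-rendered training data.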

@article{tang2025_2506.09284,
  title={UAD: Unsupervised Affordance Distillation for Generalization in Robotic Manipulation},
  author={Yihe Tang and Wenlong Huang and Yingke Wang and Chengshu Li and Roy Yuan and Ruohan Zhang and Jiajun Wu and Li Fei-Fei},
  journal={arXiv preprint arXiv:2506.09284},
  year={2025}
}