
RoboDesign1M: A Large-scale Dataset for Robot Design Understanding

9 March 2025
Tri Le
Toan Nguyen
Quang-Dieu Tran
Quang Minh Nguyen
Baoru Huang
Hoan Nguyen
Minh Nhat Vu
Tung D. Ta
Anh Nguyen
    3DV
Abstract

Robot design is a complex and time-consuming process that requires specialized expertise. Gaining a deeper understanding of robot design data can enable various applications, including automated design generation, retrieving example designs from text, and developing AI-powered design assistants. While recent advancements in foundation models present promising approaches to addressing these challenges, progress in this field is hindered by the lack of large-scale design datasets. In this paper, we introduce RoboDesign1M, a large-scale dataset comprising 1 million samples. Our dataset features multimodal data collected from scientific literature, covering various robotics domains. We propose a semi-automated data collection pipeline, enabling efficient and diverse data acquisition. To assess the effectiveness of RoboDesign1M, we conduct extensive experiments across multiple tasks, including design image generation, visual question answering about designs, and design image retrieval. The results demonstrate that our dataset serves as a challenging new benchmark for design understanding tasks and has the potential to advance research in this field. RoboDesign1M will be released to support further developments in AI-driven robotic design automation.
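The design image retrieval task mentioned above can be illustrated with a minimal sketch: given a text query embedding and per-image embeddings from some multimodal encoder, candidate designs are ranked by cosine similarity. This is a generic illustration, not the paper's method; the file names and embedding values below are hypothetical stand-ins for real encoder outputs.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_emb, design_embs, top_k=2):
    """Rank design images by cosine similarity to a text query embedding."""
    ranked = sorted(design_embs.items(),
                    key=lambda kv: cosine(query_emb, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Toy embeddings standing in for outputs of a multimodal encoder (hypothetical values).
designs = {
    "gripper_v2.png":  [0.9, 0.1, 0.0],
    "quadruped.png":   [0.1, 0.8, 0.3],
    "drone_frame.png": [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]  # embedding of a query like "two-finger robot gripper"

print(retrieve(query, designs, top_k=1))  # → ['gripper_v2.png']
```

In a real pipeline the toy vectors would come from a jointly trained text and image encoder, and the ranking step would run over the full dataset with an approximate nearest-neighbor index rather than an exhaustive sort.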

@article{le2025_2503.06796,
  title={RoboDesign1M: A Large-scale Dataset for Robot Design Understanding},
  author={Tri Le and Toan Nguyen and Quang Tran and Quang Nguyen and Baoru Huang and Hoan Nguyen and Minh Nhat Vu and Tung D. Ta and Anh Nguyen},
  journal={arXiv preprint arXiv:2503.06796},
  year={2025}
}