
BETTY Dataset: A Multi-modal Dataset for Full-Stack Autonomy

12 May 2025
Micah Nye
Ayoub Raji
Andrew Saba
Eidan Erlich
Robert Exley
Aragya Goyal
Alexander Matros
Ritesh Misra
Matthew Sivaprakasam
Marko Bertogna
Deva Ramanan
Sebastian A. Scherer
Abstract

We present the BETTY dataset, a large-scale, multi-modal dataset collected on several autonomous racing vehicles, targeting supervised and self-supervised state estimation, dynamics modeling, motion forecasting, perception, and more. Existing large-scale datasets, especially autonomous vehicle datasets, focus primarily on supervised perception, planning, and motion forecasting tasks. Our work enables multi-modal, data-driven methods by including all sensor inputs and the outputs from the software stack, along with semantic metadata and ground truth information. The dataset encompasses 4 years of data, currently comprising over 13 hours and 32 TB, collected on autonomous racing vehicle platforms. This data spans 6 diverse racing environments, including high-speed oval courses, for single and multi-agent algorithm evaluation in feature-sparse scenarios, as well as high-speed road courses with high longitudinal and lateral accelerations and tight, GPS-denied environments. It captures highly dynamic states, such as 63 m/s crashes, loss of tire traction, and operation at the limit of stability. By offering a large breadth of cross-modal and dynamic data, the BETTY dataset enables the training and testing of full autonomy stack pipelines, pushing the performance of all algorithms to the limits. The current dataset is available at this https URL.
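The abstract notes that the release bundles raw sensor inputs with software-stack outputs, semantic metadata, and ground truth. As a purely illustrative sketch of how such cross-modal data might be consumed, the snippet below aligns two record streams to a reference camera stream by nearest timestamp. The directory layout, file names (e.g. camera_front.csv, controller_output.csv), and CSV index format are assumptions for illustration only, not the published BETTY format.

```python
# Hypothetical sketch: align cross-modal records to camera frames by nearest timestamp.
# NOTE: file names and layout below are assumptions, NOT the published BETTY format.
from pathlib import Path
import numpy as np


def load_timestamps(csv_path: Path) -> np.ndarray:
    """Load a 1-D array of per-record timestamps (seconds) from a CSV index file."""
    return np.loadtxt(csv_path, delimiter=",", usecols=0)


def nearest_indices(query_ts: np.ndarray, target_ts: np.ndarray) -> np.ndarray:
    """For each query timestamp, return the index of the closest target timestamp."""
    pos = np.searchsorted(target_ts, query_ts)
    pos = np.clip(pos, 1, len(target_ts) - 1)
    left, right = target_ts[pos - 1], target_ts[pos]
    return np.where(query_ts - left < right - query_ts, pos - 1, pos)


if __name__ == "__main__":
    root = Path("betty_sample_run")  # hypothetical extracted run directory

    cam_ts = load_timestamps(root / "camera_front.csv")        # assumed index files
    imu_ts = load_timestamps(root / "imu.csv")
    ctrl_ts = load_timestamps(root / "controller_output.csv")  # stack output stream

    # Align IMU and controller records to each camera frame for joint training samples.
    imu_idx = nearest_indices(cam_ts, imu_ts)
    ctrl_idx = nearest_indices(cam_ts, ctrl_ts)
    print(f"{len(cam_ts)} camera frames aligned to IMU and controller records")
```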

@article{nye2025_2505.07266,
  title={BETTY Dataset: A Multi-modal Dataset for Full-Stack Autonomy},
  author={Micah Nye and Ayoub Raji and Andrew Saba and Eidan Erlich and Robert Exley and Aragya Goyal and Alexander Matros and Ritesh Misra and Matthew Sivaprakasam and Marko Bertogna and Deva Ramanan and Sebastian Scherer},
  journal={arXiv preprint arXiv:2505.07266},
  year={2025}
}