Towards Safe Robot Foundation Models

10 March 2025
Maximilian Tölle
Theo Gruner
Daniel Palenicek
Jonas Günster
Puze Liu
Joe Watson
Davide Tateo
Jan Peters
Abstract

Robot foundation models hold the potential for deployment across diverse environments, from industrial applications to household tasks. While current research focuses primarily on the policies' generalization capabilities across a variety of tasks, it fails to address safety, a critical requirement for deployment on real-world systems. In this paper, we introduce a safety layer designed to constrain the action space of any generalist policy appropriately. Our approach uses ATACOM, a safe reinforcement learning algorithm that creates a safe action space and, therefore, ensures safe state transitions. By extending ATACOM to generalist policies, our method facilitates their deployment in safety-critical scenarios without requiring any specific safety fine-tuning. We demonstrate the effectiveness of this safety layer in an air hockey environment, where it prevents a puck-hitting agent from colliding with its surroundings, a failure observed in generalist policies.

@article{tölle2025_2503.07404,
  title={Towards Safe Robot Foundation Models},
  author={Maximilian Tölle and Theo Gruner and Daniel Palenicek and Jonas Günster and Puze Liu and Joe Watson and Davide Tateo and Jan Peters},
  journal={arXiv preprint arXiv:2503.07404},
  year={2025}
}