A Large Recurrent Action Model: xLSTM enables Fast Inference for Robotics Tasks

21 February 2025
Thomas Schmied, Thomas Adler, Vihang Patil, Maximilian Beck, Korbinian Pöppel, Johannes Brandstetter, Günter Klambauer, Razvan Pascanu, Sepp Hochreiter
Abstract

In recent years, the field of Reinforcement Learning (RL) has trended towards large action models trained offline on large-scale datasets via sequence modeling. Existing models are primarily based on the Transformer architecture, which yields powerful agents. However, due to their slow inference times, Transformer-based approaches are impractical for real-time applications such as robotics. Recently, modern recurrent architectures such as xLSTM and Mamba have been proposed, which offer parallelization benefits during training similar to the Transformer architecture while providing fast inference. In this work, we study the aptitude of these modern recurrent architectures for large action models. Consequently, we propose a Large Recurrent Action Model (LRAM) with an xLSTM at its core, which comes with linear-time inference complexity and natural sequence-length extrapolation abilities. Experiments on 432 tasks from 6 domains show that LRAM compares favorably to Transformers in terms of both performance and speed.
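To make the inference-speed argument concrete, the following minimal sketch (not the authors' code) illustrates why a recurrent action model admits linear-time rollouts: each action is produced from a fixed-size recurrent state, so per-step cost stays constant, whereas a Transformer attends over a context that grows with the episode. A plain PyTorch LSTMCell stands in for the xLSTM core here; the class name, dimensions, and placeholder observations are illustrative assumptions.

# Minimal sketch, assuming a standard LSTM cell as a stand-in for the xLSTM core.
import torch
import torch.nn as nn

class RecurrentActionModel(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden)   # embed the observation
        self.cell = nn.LSTMCell(hidden, hidden)     # recurrent core (xLSTM stand-in)
        self.head = nn.Linear(hidden, act_dim)      # map hidden state to an action

    def step(self, obs, state):
        # One environment step: O(1) compute and memory regardless of how many
        # steps came before -- the entire history is summarized in `state`.
        h, c = self.cell(torch.relu(self.encoder(obs)), state)
        return self.head(h), (h, c)

model = RecurrentActionModel(obs_dim=17, act_dim=6)
state = (torch.zeros(1, 256), torch.zeros(1, 256))  # fixed-size recurrent state
for t in range(1000):                               # long rollout: per-step cost stays flat
    obs = torch.randn(1, 17)                        # placeholder observation
    action, state = model.step(obs, state)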

@article{schmied2025_2410.22391,
  title={A Large Recurrent Action Model: xLSTM enables Fast Inference for Robotics Tasks},
  author={Thomas Schmied and Thomas Adler and Vihang Patil and Maximilian Beck and Korbinian Pöppel and Johannes Brandstetter and Günter Klambauer and Razvan Pascanu and Sepp Hochreiter},
  journal={arXiv preprint arXiv:2410.22391},
  year={2025}
}