MamBEV: Enabling State Space Models to Learn Birds-Eye-View Representations

Abstract

3D visual perception tasks, such as 3D detection from multi-camera images, are essential components of autonomous driving and assistance systems. However, designing computationally efficient methods remains a significant challenge. In this paper, we propose a Mamba-based framework called MamBEV, which learns unified Bird's Eye View (BEV) representations using linear spatio-temporal SSM-based attention. This approach supports multiple 3D perception tasks with significantly improved computational and memory efficiency. Furthermore, we introduce an SSM-based cross-attention mechanism, analogous to standard cross-attention, in which BEV query representations interact with relevant image features. Extensive experiments demonstrate MamBEV's promising performance across diverse visual perception metrics, highlighting its advantages in input-scaling efficiency compared to existing benchmark models.

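To make the idea of SSM-based cross-attention concrete, the sketch below shows one way BEV queries could read image features through a linear-time selective-SSM scan: image tokens are scanned first to populate the recurrent state, and BEV query tokens are scanned afterwards to read from it. This is an illustrative PyTorch sketch only, not the authors' implementation; the class name ToySSMCrossAttention, the chosen dimensions, and the naive per-step recurrence are assumptions made for exposition.

# Illustrative sketch of an SSM-style "cross-attention" between BEV queries and
# image features. NOT the MamBEV code: module name, dimensions, and the simple
# diagonal selective-SSM recurrence below are assumptions for exposition only.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ToySSMCrossAttention(nn.Module):
    """Let BEV queries read image features through a linear-time SSM scan.

    Image tokens are scanned first so they populate the recurrent state;
    BEV query tokens are scanned afterwards and read that state, which is
    the rough analogue of queries attending over image keys/values.
    """

    def __init__(self, d_model: int, d_state: int = 16):
        super().__init__()
        self.in_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)
        # Input-dependent (selective) SSM parameters, one set per channel.
        self.to_dt = nn.Linear(d_model, d_model)   # step size
        self.to_B = nn.Linear(d_model, d_state)    # input matrix
        self.to_C = nn.Linear(d_model, d_state)    # output matrix
        self.A_log = nn.Parameter(torch.zeros(d_model, d_state))  # state decay

    def forward(self, bev_queries: torch.Tensor, img_feats: torch.Tensor) -> torch.Tensor:
        # bev_queries: (B, Q, D); img_feats: (B, N, D)
        x = torch.cat([img_feats, bev_queries], dim=1)  # image tokens scanned first
        x = self.in_proj(x)
        batch, length, d_model = x.shape

        dt = F.softplus(self.to_dt(x))       # (B, L, D)
        Bmat = self.to_B(x)                  # (B, L, S)
        Cmat = self.to_C(x)                  # (B, L, S)
        A = -torch.exp(self.A_log)           # (D, S), negative for stability

        h = x.new_zeros(batch, d_model, self.A_log.shape[1])  # recurrent state
        ys = []
        for t in range(length):              # naive O(L) scan for clarity
            dA = torch.exp(dt[:, t].unsqueeze(-1) * A)              # (B, D, S)
            dB = dt[:, t].unsqueeze(-1) * Bmat[:, t].unsqueeze(1)   # (B, D, S)
            h = dA * h + dB * x[:, t].unsqueeze(-1)
            ys.append((h * Cmat[:, t].unsqueeze(1)).sum(-1))        # (B, D)
        y = torch.stack(ys, dim=1)

        # Keep only the positions that correspond to the BEV queries.
        n_img = img_feats.shape[1]
        return self.out_proj(y[:, n_img:])


if __name__ == "__main__":
    # Toy shapes for the demo: 4 cameras' flattened features feeding 100 BEV queries.
    layer = ToySSMCrossAttention(d_model=256)
    img = torch.randn(2, 4 * 64, 256)   # (batch, image tokens, channels)
    bev = torch.randn(2, 100, 256)      # (batch, BEV queries, channels)
    out = layer(bev, img)
    print(out.shape)                    # torch.Size([2, 100, 256])

In practice a parallel scan (as in Mamba) would replace the explicit Python loop to realize the linear-time efficiency the abstract refers to; the loop is kept here only to make the recurrence readable.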
@article{ke2025_2503.13858,
  title={MamBEV: Enabling State Space Models to Learn Birds-Eye-View Representations},
  author={Hongyu Ke and Jack Morris and Kentaro Oguchi and Xiaofei Cao and Yongkang Liu and Haoxin Wang and Yi Ding},
  journal={arXiv preprint arXiv:2503.13858},
  year={2025}
}