
Astra: Efficient Transformer Architecture and Contrastive Dynamics Learning for Embodied Instruction Following

Main: 9 pages · Appendix: 6 pages · Bibliography: 4 pages · 16 figures · 7 tables
Abstract

Vision-language-action models have gained significant attention for their ability to model multimodal sequences in embodied instruction following tasks. However, most existing models rely on causal attention, which we find suboptimal for processing sequences composed of interleaved segments from different modalities. In this paper, we introduce Astra, a novel Transformer architecture featuring trajectory attention and learnable action queries, designed to efficiently process segmented multimodal trajectories and predict actions for imitation learning. Furthermore, we propose a contrastive dynamics learning objective to enhance the model's understanding of environment dynamics and multimodal alignment, complementing the primary behavior cloning objective. Through extensive experiments on three large-scale robot manipulation benchmarks, Astra demonstrates substantial performance improvements over previous models.
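The abstract does not spell out the form of the contrastive dynamics objective, but a common choice for this kind of auxiliary loss is a batch-wise InfoNCE term that pulls a predicted next-state embedding toward its true next-state embedding and pushes it away from other samples in the batch. The sketch below is a generic illustration under that assumption; the function name, embedding shapes, and temperature are hypothetical and not taken from the paper.

```python
import numpy as np

def info_nce_loss(pred_next, true_next, temperature=0.1):
    """Batch-wise InfoNCE: each predicted next-state embedding (row i of
    pred_next) is treated as a query whose positive is row i of true_next;
    all other rows in the batch serve as negatives."""
    # L2-normalize so the dot product is cosine similarity.
    pred = pred_next / np.linalg.norm(pred_next, axis=1, keepdims=True)
    true = true_next / np.linalg.norm(true_next, axis=1, keepdims=True)
    logits = pred @ true.T / temperature           # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive pairs sit on the diagonal: prediction i matches target i.
    return -np.mean(np.diag(log_probs))
```

With perfectly aligned predictions the diagonal dominates and the loss approaches zero; mismatched pairs drive it up, which is what makes it a useful auxiliary signal alongside behavior cloning.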
