
RedMotion: Motion Prediction via Redundancy Reduction

Abstract

We introduce RedMotion, a transformer model for motion prediction in self-driving vehicles that learns environment representations via redundancy reduction. Our first type of redundancy reduction is induced by an internal transformer decoder and reduces a variable-sized set of local road environment tokens, representing road graphs and agent data, to a fixed-sized global embedding. The second type of redundancy reduction is obtained by self-supervised learning and applies the redundancy reduction principle to embeddings generated from augmented views of road environments. Our experiments reveal that our representation learning approach outperforms PreTraM, Traj-MAE, and GraphDINO in a semi-supervised setting. Moreover, RedMotion achieves competitive results compared to HPTR and MTR++ in the Waymo Motion Prediction Challenge. Our open-source implementation is available at: this https URL
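The following is a minimal, illustrative sketch (not the authors' implementation) of the two redundancy-reduction ideas described in the abstract: a transformer decoder with learned query tokens that reduces a variable-sized set of local road environment tokens to a fixed-sized global embedding, and a Barlow Twins-style objective that decorrelates embedding dimensions across two augmented views. All module names, hyperparameters, and the use of PyTorch are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class GlobalTokenDecoder(nn.Module):
    """Reduces a variable-sized set of local road-environment tokens to a
    fixed-sized embedding via cross-attention from learned query tokens.
    (Hypothetical module; dimensions are illustrative.)"""

    def __init__(self, embed_dim=128, num_global_tokens=16, num_heads=4, num_layers=2):
        super().__init__()
        self.global_tokens = nn.Parameter(torch.randn(num_global_tokens, embed_dim))
        layer = nn.TransformerDecoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True
        )
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)

    def forward(self, local_tokens):
        # local_tokens: (batch, num_local, embed_dim); num_local may vary per scene.
        batch_size = local_tokens.size(0)
        queries = self.global_tokens.unsqueeze(0).expand(batch_size, -1, -1)
        # Fixed-size learned queries attend to the variable-length local tokens.
        global_tokens = self.decoder(tgt=queries, memory=local_tokens)
        return global_tokens.mean(dim=1)  # (batch, embed_dim)


def redundancy_reduction_loss(z_a, z_b, lambda_offdiag=5e-3):
    """Barlow Twins-style loss between embeddings of two augmented views:
    pushes their cross-correlation matrix toward the identity, which reduces
    redundancy between embedding dimensions."""
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-6)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-6)
    n = z_a.size(0)
    cross_corr = (z_a.T @ z_b) / n  # (embed_dim, embed_dim)
    on_diag = (torch.diagonal(cross_corr) - 1).pow(2).sum()
    off_diag = (cross_corr - torch.diag(torch.diagonal(cross_corr))).pow(2).sum()
    return on_diag + lambda_offdiag * off_diag


if __name__ == "__main__":
    decoder = GlobalTokenDecoder()
    view_a = torch.randn(8, 120, 128)  # augmented view A: 120 local tokens per scene
    view_b = torch.randn(8, 120, 128)  # augmented view B of the same scenes
    loss = redundancy_reduction_loss(decoder(view_a), decoder(view_b))
    print(loss.item())
```

In this sketch, the first reduction (variable-length tokens to a fixed-size embedding) happens in the decoder's cross-attention, while the second reduction (decorrelating embedding dimensions across augmented views) happens in the loss; both are placeholders for the paper's actual design.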

@article{wagner2025_2306.10840,
  title={RedMotion: Motion Prediction via Redundancy Reduction},
  author={Royden Wagner and Omer Sahin Tas and Marvin Klemp and Carlos Fernandez and Christoph Stiller},
  journal={arXiv preprint arXiv:2306.10840},
  year={2025}
}