Learning the RoPEs: Better 2D and 3D Position Encodings with STRING

4 February 2025
Connor Schenck
Isaac Reid
Mithun George Jacob
Alex Bewley
Joshua Ainslie
David Rendleman
Deepali Jain
Mohit Sharma
Avinava Dubey
Ayzaan Wahid
Sumeet Singh
René Wagner
Tianli Ding
Chuyuan Fu
Arunkumar Byravan
Jake Varley
Alexey Gritsenko
Matthias Minderer
Dmitry Kalashnikov
Jonathan Tompson
Vikas Sindhwani
Krzysztof Choromanski
Abstract

We introduce STRING: Separable Translationally Invariant Position Encodings. STRING extends Rotary Position Encodings, a recently proposed and widely used algorithm in large language models, via a unifying theoretical framework. Importantly, STRING still provides exact translation invariance, including token coordinates of arbitrary dimensionality, whilst maintaining a low computational footprint. These properties are especially important in robotics, where efficient 3D token representation is key. We integrate STRING into Vision Transformers with RGB(-D) inputs (color plus optional depth), showing substantial gains, e.g. in open-vocabulary object detection and for robotics controllers. We complement our experiments with a rigorous mathematical analysis, proving the universality of our methods.

@article{schenck2025_2502.02562,
  title={Learning the RoPEs: Better 2D and 3D Position Encodings with STRING},
  author={Connor Schenck and Isaac Reid and Mithun George Jacob and Alex Bewley and Joshua Ainslie and David Rendleman and Deepali Jain and Mohit Sharma and Avinava Dubey and Ayzaan Wahid and Sumeet Singh and Rene Wagner and Tianli Ding and Chuyuan Fu and Arunkumar Byravan and Jake Varley and Alexey Gritsenko and Matthias Minderer and Dmitry Kalashnikov and Jonathan Tompson and Vikas Sindhwani and Krzysztof Choromanski},
  journal={arXiv preprint arXiv:2502.02562},
  year={2025}
}