TTT3R: 3D Reconstruction as Test-Time Training

30 September 2025
Xingyu Chen
Yue Chen
Yuliang Xiu
Andreas Geiger
Anpei Chen
3DV
ArXiv (abs) · PDF · HTML · HuggingFace (10 upvotes)
Main: 9 Pages
19 Figures
Bibliography: 6 Pages
2 Tables
Appendix: 7 Pages
Abstract

Modern Recurrent Neural Networks have become a competitive architecture for 3D reconstruction due to their linear-time complexity. However, their performance degrades significantly when applied beyond the training context length, revealing limited length generalization. In this work, we revisit 3D reconstruction foundation models from a Test-Time Training perspective, framing their design as an online learning problem. Building on this perspective, we leverage the alignment confidence between the memory state and incoming observations to derive a closed-form learning rate for memory updates, balancing the retention of historical information against adaptation to new observations. This training-free intervention, termed TTT3R, substantially improves length generalization, achieving a 2× improvement in global pose estimation over baselines, while operating at 20 FPS with just 6 GB of GPU memory to process thousands of images. Code available at this https URL.
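To make the idea of a confidence-derived learning rate concrete, the sketch below shows a generic, per-token confidence-gated memory update written as an online (test-time) step. The function name `confidence_gated_update`, the sigmoid gate, and the linear interpolation are illustrative assumptions for exposition only; they are not the paper's exact closed-form rule, which is derived from the alignment confidence described in the abstract.

```python
import torch

def confidence_gated_update(memory, observation, query, key):
    """Minimal sketch of a confidence-weighted recurrent memory update.

    memory:      (N, D) current memory/state tokens
    observation: (N, D) features extracted from the incoming frame
    query, key:  (N, D) projections used to score alignment

    A per-token learning rate is derived from how well the new observation
    aligns with the existing memory: well-explained tokens change little,
    while novel content is written into memory more aggressively.
    """
    d = query.shape[-1]
    # Alignment confidence in [0, 1] from a scaled dot product.
    # Hypothetical gate -- TTT3R's actual closed-form rule may differ.
    align = torch.sigmoid((query * key).sum(dim=-1, keepdim=True) / d ** 0.5)

    # Low confidence -> larger update (adapt); high confidence -> retain history.
    lr = 1.0 - align
    return (1.0 - lr) * memory + lr * observation


if __name__ == "__main__":
    N, D = 1024, 256
    mem = torch.randn(N, D)
    obs = torch.randn(N, D)
    q, k = torch.randn(N, D), torch.randn(N, D)
    mem = confidence_gated_update(mem, obs, q, k)  # one online test-time step
    print(mem.shape)  # torch.Size([1024, 256])
```

Because each step is a closed-form interpolation rather than a gradient-based optimization, such an update keeps the recurrent model's linear-time cost, which is consistent with the reported 20 FPS throughput over thousands of images.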
