Transformer-Based Model for Monocular Visual Odometry: A Video Understanding Approach

10 May 2023
André O. Françani, Marcos R. O. A. Máximo

Papers citing "Transformer-Based Model for Monocular Visual Odometry: A Video Understanding Approach"

4 / 4 papers shown
Scene-agnostic Pose Regression for Visual Localization
Junwei Zheng, Ruiping Liu, Y. Chen, Zhenfang Chen, Kailun Yang, Jiaming Zhang, Rainer Stiefelhagen
25 Mar 2025

Motion Consistency Loss for Monocular Visual Odometry with Attention-Based Deep Learning
André O. Françani, Marcos R. O. A. Máximo
19 Jan 2024

Is Space-Time Attention All You Need for Video Understanding?
Gedas Bertasius, Heng Wang, Lorenzo Torresani
09 Feb 2021

ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras
Raul Mur-Artal, Juan D. Tardós
20 Oct 2016