ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

L4P: Low-Level 4D Vision Perception Unified

18 February 2025
Abhishek Badki
Hang Su
Bowen Wen
Orazio Gallo
Abstract

The spatio-temporal relationship between the pixels of a video carries critical information for low-level 4D perception tasks. A single model that reasons about it should be able to solve several such tasks well. Yet, most state-of-the-art methods rely on architectures specialized for the task at hand. We present L4P, a feedforward, general-purpose architecture that solves low-level 4D perception tasks in a unified framework. L4P leverages a pre-trained ViT-based video encoder and combines it with per-task heads that are lightweight and therefore do not require extensive training. Despite its general and feedforward formulation, our method matches or surpasses the performance of existing specialized methods on both dense tasks, such as depth or optical flow estimation, and sparse tasks, such as 2D/3D tracking. Moreover, it solves all tasks at once in a time comparable to that of single-task methods.
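The unified design the abstract describes, one shared pre-trained video encoder feeding several lightweight per-task heads, can be sketched in miniature. This is a toy illustration, not the authors' code: the function names, feature dimensions, and head logic below are all assumptions chosen only to show how one encoding pass can serve both dense and sparse heads.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_encoder(video):
    # Stand-in for the pre-trained ViT-based video encoder:
    # (T, H, W, C) frames -> (T, D) feature vectors, one per frame.
    # A real encoder would tokenize patches; this is a crude projection.
    T, D = video.shape[0], 16
    return video.reshape(T, -1)[:, :D]

def depth_head(features):
    # Dense-task head stub (e.g. depth): maps per-frame features
    # to a per-frame scalar prediction here, for brevity.
    return features.mean(axis=1)                  # shape (T,)

def track_head(features, query):
    # Sparse-task head stub (e.g. 2D tracking): scores each frame's
    # features against a query-point feature vector.
    return features @ query                       # shape (T,)

video = rng.random((4, 8, 8, 3))                  # 4 frames of 8x8 RGB
feats = shared_encoder(video)                     # encoded once ...
depth = depth_head(feats)                         # ... then reused
scores = track_head(feats, rng.random(16))        # by every head
print(depth.shape, scores.shape)
```

The point of the sketch is the data flow: because the heads are lightweight and share one encoding pass, adding a task adds a head, not a full model, which is why all tasks can run in roughly the time of a single-task method.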

@article{badki2025_2502.13078,
  title={L4P: Low-Level 4D Vision Perception Unified},
  author={Abhishek Badki and Hang Su and Bowen Wen and Orazio Gallo},
  journal={arXiv preprint arXiv:2502.13078},
  year={2025}
}