Vector-Quantized Vision Foundation Models for Object-Centric Learning

27 February 2025
Rongzhen Zhao
Vivienne Wang
Juho Kannala
Joni Pajarinen
OCL, VLM
Abstract

Perceiving visual scenes as objects and background, as humans do, Object-Centric Learning (OCL) aggregates image or video feature maps into object-level feature vectors, termed "slots". OCL's self-supervision of reconstructing the input from these aggregated slots struggles with complex object textures, so Vision Foundation Model (VFM) representations are used as the aggregation input and reconstruction target. However, existing methods leverage VFM representations in diverse ways and often fail to fully exploit their potential. In response, we propose a clean architecture, Vector-Quantized VFMs for OCL (VQ-VFM-OCL, or VVO), that unifies mainstream OCL methods. The key to our unification is simple yet effective: shared quantization of the same VFM representation used as the reconstruction target. Through mathematical modeling and statistical verification, we further analyze why VFM representations facilitate OCL aggregation and how their shared quantization as reconstruction targets strengthens OCL supervision. Experiments show that across different VFMs, aggregators, and decoders, VVO consistently outperforms baselines in object discovery and recognition, as well as in downstream visual prediction and reasoning. The source code is available in the supplemental files.
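The mechanism the abstract describes, quantizing a frozen VFM feature map with a learned codebook and using the quantized map as a shared reconstruction target for the slot aggregator and decoder, can be sketched as follows. This is a minimal illustration only, assuming a VQ-VAE-style codebook in PyTorch; names such as SharedVQTarget, aggregator, and decoder are placeholders and do not reflect the authors' actual implementation.

# Hypothetical sketch (not the paper's code): vector-quantize frozen VFM patch
# features so the same discretized map serves as the reconstruction target.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedVQTarget(nn.Module):
    """VQ-VAE-style quantization of VFM feature maps with a learnable codebook."""
    def __init__(self, num_codes=4096, dim=768, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        nn.init.normal_(self.codebook.weight, std=0.02)
        self.beta = beta  # commitment-loss weight

    def forward(self, feats):  # feats: (B, N, D) patch tokens from a frozen VFM
        flat = feats.reshape(-1, feats.size(-1))                 # (B*N, D)
        # Squared L2 distance of each patch feature to every codebook entry.
        dists = (flat.pow(2).sum(1, keepdim=True)
                 - 2 * flat @ self.codebook.weight.t()
                 + self.codebook.weight.pow(2).sum(1))
        idx = dists.argmin(dim=1)                                # nearest code per patch
        quant = self.codebook(idx).view_as(feats)
        # Codebook + commitment losses; straight-through estimator for gradients.
        vq_loss = (F.mse_loss(quant, feats.detach())
                   + self.beta * F.mse_loss(feats, quant.detach()))
        quant = feats + (quant - feats).detach()
        return quant, vq_loss

# Usage sketch: the decoder reconstructs the quantized features from slots, so
# supervision comes from a shared, discretized target rather than raw VFM output.
# feats = vfm(images)                 # e.g. frozen DINO/DINOv2 patch tokens
# slots = aggregator(feats)           # e.g. Slot Attention
# target, vq_loss = shared_vq(feats)
# recon = decoder(slots)              # any mainstream OCL decoder
# loss = F.mse_loss(recon, target.detach()) + vq_loss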

@article{zhao2025_2502.20263,
  title={Vector-Quantized Vision Foundation Models for Object-Centric Learning},
  author={Rongzhen Zhao and Vivienne Wang and Juho Kannala and Joni Pajarinen},
  journal={arXiv preprint arXiv:2502.20263},
  year={2025}
}