ResearchTrend.AI
FORLA: Federated Object-centric Representation Learning with Slot Attention

3 June 2025
Guiqiu Liao
M. Jogan
Eric Eaton
Daniel A. Hashimoto
Main: 9 pages · Appendix: 11 pages · Bibliography: 4 pages · 6 figures · 11 tables
Abstract

Learning efficient visual representations across heterogeneous unlabeled datasets remains a central challenge in federated learning. Effective federated representations require features that are jointly informative across clients while disentangling domain-specific factors without supervision. We introduce FORLA, a novel framework for federated object-centric representation learning and feature adaptation across clients using unsupervised slot attention. At the core of our method is a shared feature adapter, trained collaboratively across clients to adapt features from foundation models, and a shared slot attention module that learns to reconstruct the adapted features. To optimize this adapter, we design a two-branch student-teacher architecture. On each client, a student decoder learns to reconstruct the full features from foundation models, while a teacher decoder reconstructs their adapted, low-dimensional counterpart. The shared slot attention module bridges cross-domain learning by aligning object-level representations across clients. Experiments on multiple real-world datasets show that our framework not only outperforms centralized baselines on object discovery but also learns a compact, universal representation that generalizes well across domains. This work highlights federated slot attention as an effective tool for scalable, unsupervised visual representation learning from cross-domain data with distributed concepts.
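The slot attention module at the core of the abstract can be illustrated with a minimal sketch of its competition step: slots attend to input features with a softmax taken over the *slot* axis (so slots compete for each feature), then update as an attention-weighted mean of the features. This is a simplified, hypothetical illustration, not the paper's implementation: it omits the learned key/query/value projections, the GRU update, and the MLP refinement of full Slot Attention, and all shapes and names here are assumptions.

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention(inputs, num_slots=4, iters=3, seed=0):
    """Simplified slot-attention sketch.

    inputs: (n, d) array of per-location features.
    Returns slots (num_slots, d) and the final attention map (num_slots, n).
    """
    rng = np.random.default_rng(seed)
    n, d = inputs.shape
    slots = rng.normal(size=(num_slots, d))  # random slot initialization
    for _ in range(iters):
        # Softmax over slots: slots compete for each input feature.
        attn = softmax(slots @ inputs.T / np.sqrt(d), axis=0)
        # Normalize per slot so each slot's weights form a weighted mean.
        attn = attn / attn.sum(axis=1, keepdims=True)
        # Update each slot as the weighted mean of the features it won.
        slots = attn @ inputs
    return slots, attn

feats = np.random.default_rng(1).normal(size=(10, 16))
slots, attn = slot_attention(feats)
print(slots.shape, attn.shape)  # (4, 16) (4, 10)
```

In FORLA this module is shared across clients and operates on adapter-compressed foundation-model features; the sketch above only conveys why the softmax-over-slots direction yields an object-level decomposition rather than ordinary attention pooling.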

@article{liao2025_2506.02964,
  title={FORLA: Federated Object-centric Representation Learning with Slot Attention},
  author={Guiqiu Liao and Matjaz Jogan and Eric Eaton and Daniel A. Hashimoto},
  journal={arXiv preprint arXiv:2506.02964},
  year={2025}
}