MOS: Modeling Object-Scene Associations in Generalized Category Discovery

15 March 2025
Zhengyuan Peng
Jinpeng Ma
Zhimin Sun
Ran Yi
Haichuan Song
Xin Tan
Lizhuang Ma
Abstract

Generalized Category Discovery (GCD) is a classification task that aims to classify both base and novel classes in unlabeled images, using knowledge from a labeled dataset. In GCD, previous research overlooks scene information or treats it as noise, reducing its impact during model training. However, in this paper, we argue that scene information should be viewed as a strong prior for inferring novel classes. We attribute the misinterpretation of scene information to a key factor: the Ambiguity Challenge inherent in GCD. Specifically, novel objects in base scenes might be wrongly classified into base categories, while base objects in novel scenes might be mistakenly recognized as novel categories. Once the ambiguity challenge is addressed, scene information can reach its full potential, significantly enhancing the performance of GCD models. To more effectively leverage scene information, we propose the Modeling Object-Scene Associations (MOS) framework, which utilizes a simple MLP-based scene-awareness module to enhance GCD performance. It achieves an exceptional average accuracy improvement of 4% on the challenging fine-grained datasets compared to state-of-the-art methods, emphasizing its superior performance in fine-grained GCD. The code is publicly available at this https URL
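The abstract states only that MOS uses a simple MLP-based scene-awareness module to model object-scene associations; it does not specify the module's architecture. The sketch below is a hypothetical illustration of such a module, assuming concatenation-based fusion of an object feature and a scene feature followed by a small MLP and a classifier over base plus novel categories. All names, dimensions, and the residual fusion choice are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an MLP-based scene-awareness module.
# The fusion strategy, dimensions, and names are assumptions; the paper's
# abstract only says a simple MLP models object-scene associations.
import torch
import torch.nn as nn


class SceneAwarenessModule(nn.Module):
    """Fuses an object feature with a scene feature via a small MLP
    and predicts logits over base + novel categories."""

    def __init__(self, feat_dim: int = 768, hidden_dim: int = 256, num_classes: int = 200):
        super().__init__()
        # MLP over the concatenated object and scene embeddings (assumed design).
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, feat_dim),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, obj_feat: torch.Tensor, scene_feat: torch.Tensor) -> torch.Tensor:
        # obj_feat, scene_feat: (batch, feat_dim) features, e.g. from a frozen backbone.
        fused = self.mlp(torch.cat([obj_feat, scene_feat], dim=-1))
        # Residual connection keeps the object representation dominant (assumption).
        return self.classifier(obj_feat + fused)


if __name__ == "__main__":
    module = SceneAwarenessModule()
    obj = torch.randn(4, 768)
    scene = torch.randn(4, 768)
    logits = module(obj, scene)
    print(logits.shape)  # torch.Size([4, 200])
```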

@article{peng2025_2503.12035,
  title={MOS: Modeling Object-Scene Associations in Generalized Category Discovery},
  author={Zhengyuan Peng and Jinpeng Ma and Zhimin Sun and Ran Yi and Haichuan Song and Xin Tan and Lizhuang Ma},
  journal={arXiv preprint arXiv:2503.12035},
  year={2025}
}