Brain Inspired Adaptive Memory Dual-Net for Few-Shot Image Classification

10 March 2025
Kexin Di, Xiuxing Li, Yuyang Han, Ziyu Li, Qing Li, Xia Wu
Abstract

Few-shot image classification has become a popular research topic owing to its wide range of real-world applications; however, supervision collapse induced by single image-level annotations remains a major challenge. Existing methods tackle this problem by locating and aligning relevant local features, but the high intra-class variability of real-world images makes it difficult to locate semantically relevant local regions under few-shot settings. Drawing inspiration from the human complementary learning system, which excels at rapidly capturing and integrating semantic features from limited examples, we propose the generalization-optimized Systems Consolidation Adaptive Memory Dual-Network (SCAM-Net). This approach simulates the systems consolidation of the complementary learning system with an adaptive memory module, addressing the difficulty of identifying meaningful features in few-shot scenarios. Specifically, we construct a Hippocampus-Neocortex dual network that consolidates a structured representation of each category; this representation is then stored and adaptively regulated, following the generalization optimization principle, in long-term memory inside the Neocortex. Extensive experiments on benchmark datasets show that the proposed model achieves state-of-the-art performance.
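The abstract does not give implementation details, but the consolidation idea can be illustrated with a minimal sketch: a fast "hippocampus" branch builds episode-specific class prototypes, while a slow "neocortex" memory of per-class representations is blended in over time. The module names, dimensions, and the exponential-moving-average update below are assumptions standing in for the paper's generalization-optimization rule, not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DualNetMemorySketch(nn.Module):
    """Hypothetical Hippocampus-Neocortex dual network with an adaptive class memory.

    Assumes inputs are precomputed backbone features of size feat_dim; both
    branches are simple projection heads here, not the paper's architecture.
    """

    def __init__(self, feat_dim=640, num_classes=64, momentum=0.99):
        super().__init__()
        self.hippocampus = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.neocortex = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
        # Long-term memory: one structured representation per class, updated slowly.
        self.register_buffer("memory", torch.zeros(num_classes, feat_dim))
        self.momentum = momentum

    @torch.no_grad()
    def consolidate(self, episode_protos, classes):
        # Adaptive memory update: blend new episode prototypes into the slowly
        # changing long-term memory (EMA stands in for the generalization-
        # optimization principle described in the abstract).
        slow = self.neocortex(episode_protos)
        self.memory[classes] = (self.momentum * self.memory[classes]
                                + (1 - self.momentum) * slow)

    def forward(self, support, support_labels, query):
        # Fast pathway (hippocampus): episode-specific class prototypes.
        s = self.hippocampus(support)
        classes = support_labels.unique()
        episode_protos = torch.stack([s[support_labels == c].mean(0) for c in classes])
        # Slow pathway (neocortex): consolidate prototypes into long-term memory.
        self.consolidate(episode_protos, classes)
        # Classify queries by cosine similarity to the consolidated representations.
        q = self.neocortex(self.hippocampus(query))
        protos = F.normalize(self.memory[classes], dim=-1)
        return F.normalize(q, dim=-1) @ protos.t()

In a typical episodic evaluation loop, the support set of each few-shot task would be passed through forward() together with its queries, and the returned logits scored against the query labels; the memory persists across episodes, which is what distinguishes this consolidation scheme from a plain prototypical network.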

View on arXiv: https://arxiv.org/abs/2503.07396
@article{di2025_2503.07396,
  title={Brain Inspired Adaptive Memory Dual-Net for Few-Shot Image Classification},
  author={Kexin Di and Xiuxing Li and Yuyang Han and Ziyu Li and Qing Li and Xia Wu},
  journal={arXiv preprint arXiv:2503.07396},
  year={2025}
}