GPS: Distilling Compact Memories via Grid-based Patch Sampling for Efficient Online Class-Incremental Learning

14 April 2025
Mingchuan Ma
Yuhao Zhou
Jindi Lv
Yuxin Tian
Dan Si
Shujian Li
Qing Ye
Jiancheng Lv
Abstract

Online class-incremental learning aims to enable models to continuously adapt to new classes with limited access to past data while mitigating catastrophic forgetting. Replay-based methods address this by maintaining a small memory buffer of previous samples, achieving competitive performance. For effective replay under constrained storage, recent approaches leverage distilled data to increase the informativeness of the memory. However, such approaches often incur significant computational overhead due to their use of bi-level optimization. Motivated by these limitations, we introduce Grid-based Patch Sampling (GPS), a lightweight and effective strategy for distilling informative memory samples without relying on a trainable model. GPS generates informative samples by sampling a subset of pixels from the original image, yielding compact low-resolution representations that preserve both semantic content and structural information. During replay, these representations are reassembled to support training and evaluation. Extensive experiments demonstrate that GPS can be seamlessly integrated into existing replay frameworks, yielding 3%-4% improvements in average end accuracy under memory-constrained settings with limited computational overhead.
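To make the sampling-and-reassembly idea concrete, below is a minimal sketch of grid-based pixel subsampling and replay-time reassembly. The stride value, the nearest-neighbour upsampling, and the function names are illustrative assumptions, not the paper's exact procedure.

import numpy as np

def grid_sample(image, stride=4):
    # Keep one pixel per stride x stride grid cell, giving a compact
    # low-resolution copy of shape (H/stride, W/stride, C).
    return image[::stride, ::stride, :].copy()

def reassemble(compact, out_hw):
    # Expand the compact representation back to the original resolution by
    # nearest-neighbour repetition so it can be replayed like a full image.
    # (Assumed reassembly scheme; the paper may use a different one.)
    h, w = out_hw
    ch, cw = compact.shape[:2]
    rows = np.repeat(compact, -(-h // ch), axis=0)[:h]
    return np.repeat(rows, -(-w // cw), axis=1)[:, :w]

# Example: a 32x32 CIFAR-style image stored at 1/16 of its pixel count.
img = np.random.rand(32, 32, 3).astype(np.float32)
compact = grid_sample(img, stride=4)           # (8, 8, 3) kept in the buffer
replayed = reassemble(compact, img.shape[:2])  # (32, 32, 3) fed to the model
print(compact.shape, replayed.shape)

Under these assumptions, a 32x32x3 image shrinks from 3,072 to 192 stored values, which is where the memory savings in a fixed-size buffer would come from.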

View on arXiv
@article{ma2025_2504.10409,
  title={GPS: Distilling Compact Memories via Grid-based Patch Sampling for Efficient Online Class-Incremental Learning},
  author={Mingchuan Ma and Yuhao Zhou and Jindi Lv and Yuxin Tian and Dan Si and Shujian Li and Qing Ye and Jiancheng Lv},
  journal={arXiv preprint arXiv:2504.10409},
  year={2025}
}