PARMESAN: Parameter-Free Memory Search and Transduction for Dense Prediction Tasks

18 March 2024
Philip Matthias Winter
Maria Wimmer
David Major
Dimitrios Lenis
Astrid Berg
Theresa Neubauer
Gaia Romana De Paolis
Johannes Novotny
Sophia Ulonska
Katja Bühler
Abstract

This work addresses flexibility in deep learning by means of transductive reasoning. For adaptation to new data and tasks, e.g., in continual learning, existing methods typically involve tuning learnable parameters or complete re-training from scratch, rendering such approaches inflexible in practice. We argue that the notion of separating computation from memory by means of transduction can act as a stepping stone for solving these issues. We therefore propose PARMESAN (parameter-free memory search and transduction), a scalable method which leverages a memory module for solving dense prediction tasks. At inference, hidden representations in memory are searched to find corresponding patterns. In contrast to other methods that rely on continuous training of learnable parameters, PARMESAN learns via memory consolidation, simply by modifying stored contents. Our method is compatible with commonly used architectures and canonically transfers to 1D, 2D, and 3D grid-based data. We demonstrate the capabilities of our approach on the complex task of continual learning. PARMESAN learns 3-4 orders of magnitude faster than established baselines while being on par in terms of predictive performance, hardware efficiency, and knowledge retention.
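
To make the transductive mechanism concrete, below is a minimal, illustrative sketch of such a memory module (not the authors' released implementation; the class and method names MemoryTransducer, consolidate, and predict are hypothetical). It stores per-pixel feature vectors together with their labels and produces dense predictions by k-nearest-neighbour search with majority voting, assuming a frozen backbone that maps inputs to per-pixel features.

# Illustrative sketch of parameter-free memory search with k-NN label
# transduction for dense prediction (hypothetical names, not the paper's code).
import torch
import torch.nn.functional as F

class MemoryTransducer:
    def __init__(self, k: int = 5):
        self.k = k
        self.keys = None    # (M, C) stored per-pixel feature vectors
        self.values = None  # (M,)   stored per-pixel class labels

    @torch.no_grad()
    def consolidate(self, feats: torch.Tensor, labels: torch.Tensor) -> None:
        # "Learning" here means modifying memory contents: append new
        # (feature, label) pairs; no gradient-based parameter updates.
        new_keys = F.normalize(feats.flatten(0, -2), dim=-1)  # (H*W, C)
        new_vals = labels.flatten().long()                    # (H*W,)
        if self.keys is None:
            self.keys, self.values = new_keys, new_vals
        else:
            self.keys = torch.cat([self.keys, new_keys])
            self.values = torch.cat([self.values, new_vals])

    @torch.no_grad()
    def predict(self, feats: torch.Tensor, num_classes: int) -> torch.Tensor:
        # Dense prediction by memory search: for every query pixel, find the
        # k most similar stored features and take a majority vote over labels.
        H, W, C = feats.shape
        queries = F.normalize(feats.reshape(-1, C), dim=-1)   # (H*W, C)
        sim = queries @ self.keys.T                           # (H*W, M)
        _, idx = sim.topk(self.k, dim=-1)                     # (H*W, k)
        votes = F.one_hot(self.values[idx], num_classes)      # (H*W, k, num_classes)
        return votes.sum(dim=1).argmax(dim=-1).reshape(H, W)

In this reading, adaptation only appends or edits stored entries, so continual learning reduces to memory consolidation, which is consistent with the speed-up over gradient-based baselines reported in the abstract.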

@article{winter2025_2403.11743,
  title={PARMESAN: Parameter-Free Memory Search and Transduction for Dense Prediction Tasks},
  author={Philip Matthias Winter and Maria Wimmer and David Major and Dimitrios Lenis and Astrid Berg and Theresa Neubauer and Gaia Romana De Paolis and Johannes Novotny and Sophia Ulonska and Katja Bühler},
  journal={arXiv preprint arXiv:2403.11743},
  year={2025}
}