Procedural Memory Is Not All You Need: Bridging Cognitive Gaps in LLM-Based Agents

6 May 2025
Schaun Wheeler
Olivier Jeunen
Abstract

Large Language Models (LLMs) represent a landmark achievement in Artificial Intelligence (AI), demonstrating unprecedented proficiency in procedural tasks such as text generation, code completion, and conversational coherence. These capabilities stem from their architecture, which mirrors human procedural memory -- the brain's ability to automate repetitive, pattern-driven tasks through practice. However, as LLMs are increasingly deployed in real-world applications, it becomes impossible to ignore their limitations when operating in complex, unpredictable environments. This paper argues that LLMs, while transformative, are fundamentally constrained by their reliance on procedural memory. To create agents capable of navigating "wicked" learning environments -- where rules shift, feedback is ambiguous, and novelty is the norm -- we must augment LLMs with semantic memory and associative learning systems. By adopting a modular architecture that decouples these cognitive functions, we can bridge the gap between narrow procedural expertise and the adaptive intelligence required for real-world problem-solving.
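
To make the proposed decoupling concrete, here is a minimal, hypothetical sketch of such a modular agent: a procedural component standing in for the LLM, a semantic memory holding explicit facts, and an associative learner keeping reward-weighted links between situations and actions. All class names, method signatures, and the toy update rule are assumptions introduced for illustration; the paper does not specify an implementation.

from __future__ import annotations

from collections import defaultdict


class ProceduralMemory:
    """Stands in for the LLM itself: fast, pattern-driven generation."""

    def generate(self, prompt: str) -> str:
        # A real system would call an LLM here; this is a placeholder.
        return f"[LLM response to: {prompt}]"


class SemanticMemory:
    """An explicit, inspectable store of facts, updatable at runtime
    without retraining the underlying model."""

    def __init__(self) -> None:
        self.facts: dict[str, str] = {}

    def store(self, key: str, value: str) -> None:
        self.facts[key] = value

    def recall(self, key: str) -> str | None:
        return self.facts.get(key)


class AssociativeLearner:
    """Reward-weighted situation -> action links, adjusted from feedback,
    so behavior can track shifting rules."""

    def __init__(self, learning_rate: float = 0.1) -> None:
        self.weights: dict[tuple[str, str], float] = defaultdict(float)
        self.lr = learning_rate

    def update(self, situation: str, action: str, reward: float) -> None:
        key = (situation, action)
        # Move the stored weight toward the observed reward.
        self.weights[key] += self.lr * (reward - self.weights[key])

    def best_action(self, situation: str, actions: list[str]) -> str:
        return max(actions, key=lambda a: self.weights[(situation, a)])


class ModularAgent:
    """Composes the three systems: recall facts, pick an action from
    learned associations, then let procedural memory generate output."""

    def __init__(self) -> None:
        self.procedural = ProceduralMemory()
        self.semantic = SemanticMemory()
        self.associative = AssociativeLearner()

    def act(self, situation: str, candidate_actions: list[str]) -> str:
        fact = self.semantic.recall(situation)
        context = f"{situation} (known: {fact})" if fact else situation
        action = self.associative.best_action(context, candidate_actions)
        return self.procedural.generate(f"{context} -> {action}")


if __name__ == "__main__":
    agent = ModularAgent()
    agent.semantic.store("user_query", "the pricing rules changed last week")
    # Environmental feedback shapes which action is preferred next time.
    agent.associative.update(
        "user_query (known: the pricing rules changed last week)",
        "ask_clarifying_question",
        reward=1.0,
    )
    print(agent.act("user_query", ["answer_directly", "ask_clarifying_question"]))

The point of the decoupling, as the abstract frames it, is that the fact store and the association weights can change on the fly in response to a shifting environment, while the procedural component stays fixed.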

@article{wheeler2025_2505.03434,
  title={Procedural Memory Is Not All You Need: Bridging Cognitive Gaps in LLM-Based Agents},
  author={Schaun Wheeler and Olivier Jeunen},
  journal={arXiv preprint arXiv:2505.03434},
  year={2025}
}