
Cognitive Load-Aware Inference: A Neuro-Symbolic Framework for Optimizing the Token Economy of Large Language Models

Yilun Zhang
Main: 16 pages · Bibliography: 3 pages · 3 tables · Appendix: 4 pages
Abstract

The escalating computational cost of Large Language Model (LLM) inference has become a critical barrier to widespread and sustainable deployment. While existing optimization strategies are effective, they are predominantly based on statistical heuristics or architectural modifications, and lack a guiding cognitive theory for managing the inference process itself. This paper bridges that gap by introducing a novel paradigm: the Cognitive Load-Aware Inference (CLAI) framework, which operationalizes principles from Cognitive Load Theory (CLT) and neuroscience for LLM inference. We formalize the concepts of Intrinsic Cognitive Load, Extraneous Cognitive Load, and Germane Cognitive Load as quantifiable LLM metrics (ICL_{LLM}, ECL_{LLM}, and GCL_{LLM}), thereby reframing inference as a cognitive-economics optimization problem: given the intrinsic complexity of a problem (ICL_{LLM}), minimize wasteful computation (ECL_{LLM}) and strategically allocate the token budget to productive reasoning (GCL_{LLM}). We propose two implementation paths: CLAI-Prompt, a zero-shot method that guides a base LLM through cognitive control steps via a structured meta-prompt, and CLAI-Tune, a fine-tuned model that internalizes these principles for spontaneous cognitive economy. Across benchmarks in complex reasoning, long-context question answering, and code generation, our methods reduce token consumption by up to 45% without sacrificing accuracy. Furthermore, CLAI-Tune exhibits an emergent ability to autonomously decompose difficult problems, a key characteristic of human expert cognition. This work demonstrates that by emulating the brain's resource-management strategies, we can build more efficient, robust, and capable artificial intelligence systems.
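Read as an optimization problem, one plausible formalization (our notation, not necessarily the paper's) is: choose a reasoning trace tau within a token budget B set from ICL_{LLM} so as to maximize GCL_{LLM}(tau) - lambda * ECL_{LLM}(tau). A minimal sketch of how the CLAI-Prompt path could look in practice follows, assuming a generic complete(prompt, max_tokens) chat-completion call; the stage prompts, the 1-5 difficulty scale, and the budget schedule are illustrative assumptions, not the paper's actual meta-prompt.

# A minimal sketch of a CLAI-Prompt-style two-stage pipeline. All names,
# prompt wordings, and the budget schedule are illustrative assumptions;
# the paper's actual structured meta-prompt is not reproduced here.

def complete(prompt: str, max_tokens: int) -> str:
    """Placeholder for any chat-completion API call (assumption)."""
    raise NotImplementedError

ASSESS = (
    "Rate the intrinsic difficulty of the task below on a scale of 1-5 "
    "(1 = trivial recall, 5 = deep multi-step reasoning). "
    "Reply with one digit.\n\nTask: {task}"
)

SOLVE = (
    "Solve the task below. Avoid restating the problem or padding the answer "
    "(extraneous load); spend tokens only on steps that advance the solution "
    "(germane load).\n\nTask: {task}"
)

# Hypothetical budget schedule: higher estimated ICL_{LLM} earns more tokens.
BUDGETS = {1: 64, 2: 128, 3: 256, 4: 512, 5: 1024}

def clai_prompt(task: str) -> str:
    # Stage 1: cheap probe to estimate intrinsic load (a proxy for ICL_{LLM}).
    reply = complete(ASSESS.format(task=task), max_tokens=4).strip()
    icl = int(reply[0]) if (reply and reply[0] in "12345") else 3
    # Stage 2: solve under a token budget matched to the estimated load,
    # capping ECL_{LLM} while leaving room for GCL_{LLM}.
    return complete(SOLVE.format(task=task), max_tokens=BUDGETS[icl])

The design point this sketch illustrates is that the load assessment itself must be cheap (a few tokens), so that the overhead of cognitive control does not itself become extraneous load.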

@article{zhang2025_2507.00653,
  title={Cognitive Load-Aware Inference: A Neuro-Symbolic Framework for Optimizing the Token Economy of Large Language Models},
  author={Yilun Zhang},
  journal={arXiv preprint arXiv:2507.00653},
  year={2025}
}