Improving Rule-based Reasoning in LLMs via Neurosymbolic Representations

Abstract

Large language models (LLMs) continue to face challenges in reliably solving reasoning tasks, particularly those that require precise rule following, as is common in mathematical reasoning. This paper introduces a novel neurosymbolic method that improves LLM reasoning by encoding hidden states into neurosymbolic vectors, enabling problem solving within a neurosymbolic vector space. The results are decoded and combined with the original hidden state, boosting the model's performance on numerical reasoning tasks. By offloading computation through neurosymbolic representations, this method improves efficiency, reliability, and interpretability. Our experimental results demonstrate an average of 82.86% lower cross-entropy loss and 24.50 times more problems correctly solved on a suite of mathematical reasoning problems, compared to chain-of-thought prompting and supervised fine-tuning (LoRA), while not hindering the LLM's performance on other tasks.
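
To make the pipeline concrete, the following is a minimal sketch of the encode-solve-decode-combine loop the abstract describes. Every name here (W_enc, W_dec, symbolic_solve, neurosymbolic_step) and the use of simple linear maps are illustrative assumptions, not the paper's actual implementation:

import numpy as np

# Sketch under stated assumptions: the encoder/decoder are stand-in linear
# maps and symbolic_solve is a placeholder for rule application in the
# symbolic space; none of these names come from the paper.

D_HIDDEN, D_SYM = 64, 32
rng = np.random.default_rng(0)
W_enc = rng.normal(size=(D_SYM, D_HIDDEN)) / np.sqrt(D_HIDDEN)  # hidden -> symbolic
W_dec = rng.normal(size=(D_HIDDEN, D_SYM)) / np.sqrt(D_SYM)     # symbolic -> hidden

def symbolic_solve(v):
    # Placeholder for exact rule following performed on the neurosymbolic
    # vector (e.g., arithmetic on decoded operands); identity stand-in here.
    return v

def neurosymbolic_step(h, alpha=0.5):
    """Encode a hidden state, solve symbolically, decode, and blend the
    result back into the original hidden state."""
    v = W_enc @ h                   # encode hidden state into a neurosymbolic vector
    v = symbolic_solve(v)           # offload precise computation to the symbolic space
    h_dec = W_dec @ v               # decode the result back to hidden-state space
    return (1 - alpha) * h + alpha * h_dec  # combine with the original hidden state

h = rng.normal(size=D_HIDDEN)       # stand-in LLM hidden state
h_new = neurosymbolic_step(h)

The key design point conveyed by the abstract is that exact computation happens in the symbolic space, so the decoded result can correct the hidden state without retraining the LLM itself.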

@article{dhanraj2025_2502.01657,
  title={Improving Rule-based Reasoning in LLMs via Neurosymbolic Representations},
  author={Varun Dhanraj and Chris Eliasmith},
  journal={arXiv preprint arXiv:2502.01657},
  year={2025}
}