
Attend or Perish: Benchmarking Attention in Algorithmic Reasoning

Abstract

Can transformers learn to perform algorithmic tasks reliably across previously unseen input/output domains? While pre-trained language models show solid accuracy on benchmarks incorporating algorithmic reasoning, assessing the reliability of these results requires the ability to disentangle models' functional capabilities from memorization. In this paper, we propose an algorithmic benchmark comprising six tasks with infinite input domains, in which the correct, robust algorithm required for each task can be identified and traced. This allows us to assess (i) models' ability to extrapolate to unseen types of inputs, including new lengths, value ranges, or input domains, and (ii) the robustness of the functional mechanisms in recent models through the lens of their attention maps. We make the implementation of all our tasks and interpretability methods publicly available at this https URL.
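As a minimal illustration of the attention-map analysis the abstract describes (a sketch under stated assumptions, not the authors' pipeline; the checkpoint name and the toy addition prompt are placeholders), one can extract per-layer attention maps from a pre-trained transformer with the HuggingFace transformers library:

    # Sketch: extracting attention maps for an algorithmic input.
    # Assumptions (not from the paper): the "gpt2" checkpoint and the
    # toy addition prompt are placeholders chosen for illustration.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # placeholder checkpoint, not the paper's model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, output_attentions=True)
    model.eval()

    # A toy algorithmic prompt; length/value extrapolation would vary this input.
    inputs = tokenizer("12345 + 67890 =", return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)

    # outputs.attentions is a tuple with one tensor per layer, each of shape
    # (batch, num_heads, seq_len, seq_len) -- the attention maps one would
    # inspect for a robust, position-tracking mechanism.
    for layer_idx, attn in enumerate(outputs.attentions):
        print(f"layer {layer_idx}: attention shape {tuple(attn.shape)}")

Varying the prompt's operand lengths or value ranges while re-reading the attention maps corresponds to the two assessments named above: input extrapolation and mechanism robustness.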

@article{spiegel2025_2503.01909,
  title={Attend or Perish: Benchmarking Attention in Algorithmic Reasoning},
  author={Michal Spiegel and Michal Štefánik and Marek Kadlčík and Josef Kuchař},
  journal={arXiv preprint arXiv:2503.01909},
  year={2025}
}