
Hierarchical Attention Generates Better Proofs

Abstract

Large language models (LLMs) have shown promise in formal theorem proving, but their token-level processing often fails to capture the inherent hierarchical nature of mathematical proofs. We introduce \textbf{Hierarchical Attention}, a regularization method that aligns LLMs' attention mechanisms with mathematical reasoning structures. Our approach establishes a five-level hierarchy from foundational elements to high-level concepts, ensuring structured information flow in proof generation. Experiments demonstrate that our method improves proof success rates by 2.05\% on miniF2F and 1.69\% on ProofNet while reducing proof complexity by 23.81\% and 16.50\% respectively. The code is available at this https URL.
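The abstract describes a regularizer that aligns attention with a five-level proof hierarchy. The sketch below illustrates one way such a penalty could be attached to the language-modeling loss, assuming each token carries an integer hierarchy level and that attention mass flowing from lower-level queries onto higher-level keys is discouraged; the function names, level-labeling scheme, penalty form, and 0.1 weight are illustrative assumptions, not the authors' implementation.

import torch

def hierarchy_attention_penalty(attn, levels):
    # attn:   (batch, heads, seq, seq) row-stochastic attention weights
    # levels: (batch, seq) integer level per token, 0 = foundational ... 4 = high-level concept
    # Hypothetical penalty: mark query->key pairs where a lower-level query
    # attends to a strictly higher-level key (flow against the hierarchy).
    q_lvl = levels.unsqueeze(-1)          # (batch, seq, 1)
    k_lvl = levels.unsqueeze(-2)          # (batch, 1, seq)
    violates = (k_lvl > q_lvl).float()    # (batch, seq, seq)
    violates = violates.unsqueeze(1)      # broadcast over attention heads
    # Average attention mass placed on violating pairs.
    return (attn * violates).sum(dim=-1).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    batch, heads, seq = 2, 4, 8
    attn = torch.softmax(torch.randn(batch, heads, seq, seq), dim=-1)
    levels = torch.randint(0, 5, (batch, seq))       # assumed per-token level labels
    lm_loss = torch.tensor(2.3)                      # placeholder LM loss
    loss = lm_loss + 0.1 * hierarchy_attention_penalty(attn, levels)
    print(loss.item())

In this reading, the regularizer leaves the model architecture unchanged and only shapes where attention mass is allowed to concentrate, which matches the paper's framing of the method as a regularization of existing attention mechanisms rather than a new attention layer.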

@article{chen2025_2504.19188,
  title={Hierarchical Attention Generates Better Proofs},
  author={Jianlong Chen and Chao Li and Yang Yuan and Andrew C Yao},
  journal={arXiv preprint arXiv:2504.19188},
  year={2025}
}