Lean-ing on Quality: How High-Quality Data Beats Diverse Multilingual Data in AutoFormalization

18 February 2025
Willy Chan
Michael Souliman
Jakob Nordhagen
Brando Miranda
Elyas Obbad
Kai Fronsdal
Sanmi Koyejo
Abstract

Autoformalization, the process of transforming informal mathematical language into formal specifications and proofs, remains a difficult task for state-of-the-art (large) language models. Existing works point to competing explanations for the performance gap. To address this, we introduce a novel methodology that leverages backtranslation with hand-curated prompts to enhance the mathematical capabilities of language models, particularly addressing the challenge posed by the scarcity of labeled data. Specifically, we evaluate three primary variations of this strategy: (1) on-the-fly (online) backtranslation, (2) distilled (offline) backtranslation with few-shot amplification, and (3) line-by-line proof analysis integrated with proof-state information. Each variant is designed to optimize data quality over quantity, focusing on the fidelity of generated proofs rather than sheer data scale. Our findings provide evidence that using our proposed approaches to generate synthetic data, prioritizing quality over volume, improves the autoformalization performance of LLMs as measured by standard benchmarks such as ProofNet. Crucially, our approach outperforms pretrained models while using a minimal number of tokens. We also show, through strategic prompting and backtranslation, that our approaches surpass fine-tuning on extensive multilingual datasets such as MMA on ProofNet while using only 1/150th of the tokens. Taken together, our methods offer a promising new way to significantly reduce the resources required to formalize proofs, thereby accelerating AI for math.
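To make the distilled (offline) backtranslation variant concrete, the sketch below shows the general shape of such a data-generation loop. This is not the authors' code: the few-shot pairs, the prompt wording, and the `llm` callable are illustrative assumptions; only the formal-to-informal direction and the quality-over-quantity filtering reflect the strategy described in the abstract.

```python
# Minimal sketch of distilled (offline) backtranslation with few-shot
# amplification for autoformalization data. NOT the paper's pipeline:
# prompts, examples, and the llm stub are illustrative assumptions.
from typing import Callable

# Hypothetical few-shot pairs, formal Lean statement -> informal English.
FEW_SHOT = [
    ("theorem add_comm' (a b : Nat) : a + b = b + a",
     "For any natural numbers a and b, the sum a + b equals b + a."),
    ("theorem sq_nonneg' (x : Real) : 0 <= x ^ 2",
     "The square of any real number is nonnegative."),
]

def build_prompt(formal: str) -> str:
    """Compose a few-shot backtranslation prompt: formal in, informal out."""
    parts = ["Translate each formal Lean statement into informal mathematics.\n"]
    for f, i in FEW_SHOT:
        parts.append(f"Formal: {f}\nInformal: {i}\n")
    parts.append(f"Formal: {formal}\nInformal:")
    return "\n".join(parts)

def backtranslate(formal_corpus: list[str],
                  llm: Callable[[str], str]) -> list[tuple[str, str]]:
    """Build (informal, formal) fine-tuning pairs from a formal corpus.

    The formal side comes from a verified corpus, so it is ground truth;
    pair quality therefore hinges only on the informal generations.
    """
    pairs = []
    for formal in formal_corpus:
        informal = llm(build_prompt(formal)).strip()
        # Quality-over-quantity filter (assumption): drop degenerate outputs.
        if informal and len(informal.split()) >= 5:
            pairs.append((informal, formal))
    return pairs

# Usage, given any string-to-string LLM wrapper `my_model`:
#   pairs = backtranslate(corpus, llm=my_model)
```

Because the formal statements are drawn from an already-verified corpus, every pair has a ground-truth formal side, and filtering only has to judge the informal text; this is one plausible reading of how a small, high-fidelity synthetic dataset can compete with far larger multilingual corpora such as MMA.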
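Variant (3) pairs each line of a proof with the proof state it addresses. A toy Lean 4 example of that kind of pairing, invented for illustration and not drawn from the paper's data, might look like:

```lean
-- Toy illustration of line-by-line proof analysis with proof states:
-- each tactic line is annotated with the goal it faces.
example (a b : Nat) : a + b = b + a := by
  -- ⊢ a + b = b + a
  rw [Nat.add_comm]
  -- no goals: rewriting with commutativity closes the goal by rfl
```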

BibTeX
@article{chan2025_2502.15795,
  title={Lean-ing on Quality: How High-Quality Data Beats Diverse Multilingual Data in AutoFormalization},
  author={Willy Chan and Michael Souliman and Jakob Nordhagen and Brando Miranda and Elyas Obbad and Kai Fronsdal and Sanmi Koyejo},
  journal={arXiv preprint arXiv:2502.15795},
  year={2025}
}