
arXiv:2512.04562

LeMat-GenBench: A Unified Evaluation Framework for Crystal Generative Models

4 December 2025
Siddharth Betala
Samuel P. Gleason
Ali Ramlaoui
Andy Xu
Georgia Channing
Daniel Levy
Clémentine Fourrier
Nikita Kazeev
Chaitanya K. Joshi
Sékou-Oumar Kaba
Félix Therrien
A. Garcia
Rocío Mercado
N. M. Anoop Krishnan
Alexandre Duval
Main text: 13 pages, 12 figures; bibliography: 7 pages; appendix: 26 pages; 13 tables.
Abstract

Generative machine learning (ML) models hold great promise for accelerating materials discovery through the inverse design of inorganic crystals, enabling an unprecedented exploration of chemical space. Yet the lack of standardized evaluation frameworks makes it difficult to assess, compare, and further develop these models meaningfully. In this work, we introduce LeMat-GenBench, a unified benchmark for generative models of crystalline materials, supported by a set of evaluation metrics designed to better inform model development and downstream applications. We release both an open-source evaluation suite and a public leaderboard on Hugging Face, and benchmark 12 recent generative models. The results reveal that, on average, gains in stability come at the cost of novelty and diversity, with no model excelling across all dimensions. Altogether, LeMat-GenBench establishes a reproducible and extensible foundation for fair model comparison and aims to guide the development of more reliable, discovery-oriented generative models for crystalline materials.
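To make the stability/novelty/diversity trade-off concrete, the sketch below shows how uniqueness and novelty fractions are commonly computed for generative crystal models: structures are reduced to canonical fingerprints, and novelty is the share of unique generated structures absent from the training/reference set. This is a minimal illustration under assumed inputs (opaque string fingerprints), not the LeMat-GenBench implementation or its API.

```python
def uniqueness(generated):
    """Fraction of generated structure fingerprints that are unique.

    `generated` is a list of canonical fingerprints (e.g. hashed
    reduced compositions + space groups) -- hypothetical inputs here.
    """
    return len(set(generated)) / len(generated)

def novelty(generated, reference):
    """Fraction of unique generated fingerprints not found in the
    reference (training) set."""
    unique = set(generated)
    return len(unique - set(reference)) / len(unique)

# Toy example: four samples, one duplicate, one match with the reference set.
generated = ["NaCl|Fm-3m", "NaCl|Fm-3m", "LiF|Fm-3m", "MgO|Fm-3m"]
reference = {"NaCl|Fm-3m", "KCl|Fm-3m"}
print(uniqueness(generated))           # 0.75
print(novelty(generated, reference))   # 2/3 -> LiF and MgO are novel
```

In practice, benchmarks of this kind replace the string comparison with a tolerance-aware structure matcher, since two relaxed crystals can be equivalent without being byte-identical.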
