34 Examples of LLM Applications in Materials Science and Chemistry: Towards Automation, Assistants, Agents, and Accelerated Scientific Discovery

5 May 2025
Yoel Zimmermann
Adib Bazgir
Alexander H Al-Feghali
Mehrad Ansari
L. C. Brinson
Yuan Chiang
Defne Çirci
Min-Hsueh Chiu
Nathan Daelman
Matthew L. Evans
Abhijeet Sadashiv Gangan
Janine George
Hassan Harb
Ghazal Khalighinejad
Sartaaj Khan
Sascha Klawohn
Magdalena Lederbauer
Soroush Mahjoubi
Bernadette Mohr
S. M. Moosavi
Aakash Naik
Aleyna Beste Ozhan
Dieter Plessers
Aritra Roy
Fabian Schöppach
P. Schwaller
Carla Terboven
Katharina Ueltzen
Shang Zhu
Jan Janssen
Calvin Li
Ian T. Foster
B. Blaiszik
Abstract

Large Language Models (LLMs) are reshaping many aspects of materials science and chemistry research, enabling advances in molecular property prediction, materials design, scientific automation, knowledge extraction, and more. Recent developments demonstrate that the latest class of models are able to integrate structured and unstructured data, assist in hypothesis generation, and streamline research workflows. To explore the frontier of LLM capabilities across the research lifecycle, we review applications of LLMs through 34 total projects developed during the second annual Large Language Model Hackathon for Applications in Materials Science and Chemistry, a global hybrid event. These projects spanned seven key research areas: (1) molecular and material property prediction, (2) molecular and material design, (3) automation and novel interfaces, (4) scientific communication and education, (5) research data management and automation, (6) hypothesis generation and evaluation, and (7) knowledge extraction and reasoning from the scientific literature. Collectively, these applications illustrate how LLMs serve as versatile predictive models, platforms for rapid prototyping of domain-specific tools, and much more. In particular, improvements in both open source and proprietary LLM performance through the addition of reasoning, additional training data, and new techniques have expanded effectiveness, particularly in low-data environments and interdisciplinary research. As LLMs continue to improve, their integration into scientific workflows presents both new opportunities and new challenges, requiring ongoing exploration, continued refinement, and further research to address reliability, interpretability, and reproducibility.

View on arXiv
@article{zimmermann2025_2505.03049,
  title={34 Examples of LLM Applications in Materials Science and Chemistry: Towards Automation, Assistants, Agents, and Accelerated Scientific Discovery},
  author={Yoel Zimmermann and Adib Bazgir and Alexander Al-Feghali and Mehrad Ansari and L. Catherine Brinson and Yuan Chiang and Defne Circi and Min-Hsueh Chiu and Nathan Daelman and Matthew L. Evans and Abhijeet S. Gangan and Janine George and Hassan Harb and Ghazal Khalighinejad and Sartaaj Takrim Khan and Sascha Klawohn and Magdalena Lederbauer and Soroush Mahjoubi and Bernadette Mohr and Seyed Mohamad Moosavi and Aakash Naik and Aleyna Beste Ozhan and Dieter Plessers and Aritra Roy and Fabian Schöppach and Philippe Schwaller and Carla Terboven and Katharina Ueltzen and Shang Zhu and Jan Janssen and Calvin Li and Ian Foster and Ben Blaiszik},
  journal={arXiv preprint arXiv:2505.03049},
  year={2025}
}