ResearchTrend.AI
Universal Semantic Embeddings of Chemical Elements for Enhanced Materials Inference and Discovery

24 February 2025
Yunze Jia
Yuehui Xian
Yangyang Xu
Pengfei Dang
Xiangdong Ding
Jun Sun
Yumei Zhou
Dezhen Xue
Abstract

We present a framework for generating universal semantic embeddings of chemical elements to advance materials inference and discovery. This framework leverages ElementBERT, a domain-specific BERT-based natural language processing model trained on 1.29 million abstracts of alloy-related scientific papers, to capture latent knowledge and contextual relationships specific to alloys. These semantic embeddings serve as robust elemental descriptors, consistently outperforming traditional empirical descriptors with significant improvements across multiple downstream tasks. These include predicting mechanical and transformation properties, classifying phase structures, and optimizing materials properties via Bayesian optimization. Applications to titanium alloys, high-entropy alloys, and shape memory alloys demonstrate up to 23% gains in prediction accuracy. Our results show that ElementBERT surpasses general-purpose BERT variants by encoding specialized alloy knowledge. By bridging contextual insights from scientific literature with quantitative inference, our framework accelerates the discovery and optimization of advanced materials, with potential applications extending beyond alloys to other material classes.
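As a minimal sketch of the descriptor idea described above: each element is mapped to a dense semantic vector (in the paper these come from ElementBERT, trained on 1.29 million alloy-related abstracts), and an alloy's descriptor can be formed as a composition-weighted combination of its elements' vectors before being fed to a downstream predictor. The element set, embedding dimension, and weighting scheme below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical stand-in for ElementBERT embeddings: each element symbol
# maps to a dense semantic vector. In the actual framework these vectors
# are extracted from a domain-specific BERT model; here random vectors
# are used purely for illustration.
rng = np.random.default_rng(0)
EMBED_DIM = 8
element_embeddings = {
    el: rng.normal(size=EMBED_DIM) for el in ["Ti", "Nb", "Zr", "Al", "V"]
}

def alloy_descriptor(composition):
    """Composition-weighted average of element embeddings.

    `composition` maps element symbols to atomic fractions (summing to 1).
    The resulting vector can serve as the input feature for property
    regression, phase classification, or Bayesian optimization.
    """
    vec = np.zeros(EMBED_DIM)
    for el, frac in composition.items():
        vec += frac * element_embeddings[el]
    return vec

# Example: descriptor for a hypothetical Ti-Nb-Zr alloy
d = alloy_descriptor({"Ti": 0.7, "Nb": 0.2, "Zr": 0.1})
print(d.shape)  # (8,)
```

The weighted-average pooling is one simple way to go from per-element vectors to a fixed-length alloy representation; any model-specific pooling the paper uses would replace this step.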

@article{jia2025_2502.14912,
  title={Universal Semantic Embeddings of Chemical Elements for Enhanced Materials Inference and Discovery},
  author={Yunze Jia and Yuehui Xian and Yangyang Xu and Pengfei Dang and Xiangdong Ding and Jun Sun and Yumei Zhou and Dezhen Xue},
  journal={arXiv preprint arXiv:2502.14912},
  year={2025}
}