GrainPaint: A multi-scale diffusion-based generative model for microstructure reconstruction of large-scale objects
Abstract
Simulation-based approaches to microstructure generation can suffer from a variety of limitations, such as high memory usage, long computational times, and difficulties in generating complex geometries. Generative machine learning models present a way around these issues, but they have previously been limited by the fixed size of their generation area. We present a new microstructure generation methodology leveraging advances in inpainting using denoising diffusion models to overcome this generation area limitation. We show that microstructures generated with the presented methodology are statistically similar to grain structures generated with a kinetic Monte Carlo simulator, SPPARKS.
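The core idea of overcoming a fixed generation area with inpainting can be illustrated with a minimal sketch: a large canvas is filled tile by tile, and each new tile is inpainted conditioned on the overlap with already-generated neighbors. The function and parameter names below (`denoise_step`, `generate_large`, `tile`, `overlap`) are hypothetical, and the stand-in denoiser just blends noise with the conditioning pixels; in the paper's method this role is played by a trained denoising diffusion model.

```python
import numpy as np

def denoise_step(patch, mask, known, rng):
    # Stand-in for a trained diffusion model's inpainting step (assumption):
    # keep the known (conditioning) pixels and fill the rest with noise.
    noise = rng.standard_normal(patch.shape)
    return np.where(mask, known, noise)

def generate_large(shape=(64, 64), tile=32, overlap=8, seed=0):
    """Fill a large canvas with overlapping tiles; each new tile is
    inpainted conditioned on the already-generated overlap region."""
    rng = np.random.default_rng(seed)
    H, W = shape
    canvas = np.full(shape, np.nan)   # NaN marks not-yet-generated pixels
    step = tile - overlap
    for y in range(0, H - overlap, step):
        for x in range(0, W - overlap, step):
            y1, x1 = min(y + tile, H), min(x + tile, W)
            patch = canvas[y:y1, x:x1]
            mask = ~np.isnan(patch)   # pixels already set by neighboring tiles
            known = np.where(mask, patch, 0.0)
            canvas[y:y1, x:x1] = denoise_step(patch, mask, known, rng)
    return canvas

field = generate_large()
```

Because each tile sees the overlap region as fixed conditioning, the stitched result has no seams of independently generated content, which is what removes the fixed-size limitation of earlier generative models.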
@article{hoffman2025_2503.04776,
  title={GrainPaint: A multi-scale diffusion-based generative model for microstructure reconstruction of large-scale objects},
  author={Nathan Hoffman and Cashen Diniz and Dehao Liu and Theron Rodgers and Anh Tran and Mark Fuge},
  journal={arXiv preprint arXiv:2503.04776},
  year={2025}
}