Generative Reliability-Based Design Optimization Using In-Context Learning Capabilities of Large Language Models

Abstract

Large Language Models (LLMs) have demonstrated remarkable in-context learning capabilities, enabling flexible use of limited historical information for reasoning, problem-solving, and complex pattern recognition. Inspired by the successful application of LLMs in multiple domains, this paper proposes a generative design method that combines the in-context learning capabilities of LLMs with the iterative search mechanisms of metaheuristic algorithms to solve reliability-based design optimization (RBDO) problems. Specifically, reliability analysis is performed by combining the LLMs with Kriging surrogate modeling to alleviate the computational burden. By dynamically providing critical information about design points to the LLM through prompt engineering, the method enables rapid generation of high-quality design alternatives that satisfy reliability constraints while optimizing performance. Using the DeepSeek-V3 model, three case studies demonstrate the performance of the proposed approach. Experimental results indicate that the proposed LLM-RBDO method successfully identifies feasible solutions that meet reliability constraints while achieving a convergence rate comparable to that of traditional genetic algorithms.
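The abstract outlines a loop in which a Kriging surrogate estimates reliability cheaply and a prompt summarizing elite design points is sent to an LLM to generate new candidates. The sketch below illustrates that idea under stated assumptions: `kriging_predict` is a minimal Gaussian-process mean predictor standing in for the paper's surrogate, `build_prompt` is a hypothetical prompt-engineering helper, and `query_llm` is a mock placeholder for a real DeepSeek-V3 API call, which the paper does not specify.

```python
import numpy as np

def kriging_predict(X_train, y_train, x, length_scale=1.0):
    """Minimal Kriging (Gaussian-process) mean prediction with an RBF kernel.

    A stand-in for the surrogate model used for fast reliability analysis;
    the paper's actual Kriging formulation may differ.
    """
    def k(a, b):
        return np.exp(-np.sum((a - b) ** 2, axis=-1) / (2 * length_scale ** 2))
    K = np.array([[k(xi, xj) for xj in X_train] for xi in X_train])
    K += 1e-8 * np.eye(len(X_train))  # jitter for numerical stability
    k_star = np.array([k(x, xi) for xi in X_train])
    return k_star @ np.linalg.solve(K, y_train)

def build_prompt(designs, objectives, reliabilities, target=0.99):
    """Hypothetical prompt engineering: serialize elite design points
    as in-context examples for the LLM."""
    lines = ["Given these designs (x, objective f, reliability R):"]
    for d, f, r in zip(designs, objectives, reliabilities):
        lines.append(f"x={np.asarray(d).tolist()}, f={f:.4f}, R={r:.3f}")
    lines.append(f"Propose a new design with R >= {target} and lower f.")
    return "\n".join(lines)

def query_llm(prompt, rng):
    """Mock of an LLM call; a real implementation would query DeepSeek-V3
    and parse the text response into a candidate design vector."""
    return rng.normal(loc=0.0, scale=0.5, size=2)
```

In the generative loop, candidates returned by `query_llm` would be screened with `kriging_predict` against the reliability constraint, and surviving points would be fed back into the next prompt, mirroring the iterative search of a metaheuristic.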

@article{jiang2025_2503.22401,
  title={Generative Reliability-Based Design Optimization Using In-Context Learning Capabilities of Large Language Models},
  author={Zhonglin Jiang and Qian Tang and Zequn Wang},
  journal={arXiv preprint arXiv:2503.22401},
  year={2025}
}