Decision Information Meets Large Language Models: The Future of Explainable Operations Research

Operations Research (OR) is vital for decision-making in many industries. While recent OR methods have seen significant improvements in automation and efficiency through the integration of Large Language Models (LLMs), they still struggle to produce meaningful explanations. This lack of clarity raises concerns about transparency and trustworthiness in OR applications. To address these challenges, we propose a comprehensive framework, Explainable Operations Research (EOR), emphasizing actionable and understandable explanations accompanying optimization. The core of EOR is the concept of Decision Information, which emerges from what-if analysis and evaluates the impact of changes in complex constraints (or parameters) on decision-making. Specifically, we utilize bipartite graphs to quantify the changes in the OR model and adopt LLMs to improve explanation capabilities. Additionally, we introduce the first industrial benchmark to rigorously evaluate the effectiveness of explanations and analyses in OR, establishing a new standard for transparency and clarity in the field.
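The what-if analysis underlying Decision Information can be illustrated with a toy example: solve an optimization problem, perturb a constraint, and report how the optimal decision and objective shift. The problem data and the `solve` helper below are hypothetical, chosen only to make the idea concrete; they are not the paper's model.

```python
from itertools import product

def solve(capacity, limit_x):
    """Brute-force a tiny production problem over an integer grid:
    maximize 3x + 2y  subject to  x + y <= capacity,  x <= limit_x."""
    return max(
        (3 * x + 2 * y, x, y)
        for x, y in product(range(capacity + 1), repeat=2)
        if x + y <= capacity and x <= limit_x
    )  # returns (objective, x, y)

base = solve(capacity=10, limit_x=6)     # baseline decision
whatif = solve(capacity=8, limit_x=6)    # what-if: tighter capacity

# "Decision information": how the decision and objective respond
# to the constraint change -- the raw material for an explanation.
print("base:", base)          # (26, 6, 4)
print("what-if:", whatif)     # (22, 6, 2)
print("objective change:", whatif[0] - base[0])  # -4
```

In the EOR framing, numeric deltas like these would be structured (e.g., via the bipartite-graph representation of the model) and then verbalized by an LLM into a human-readable explanation.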
@article{zhang2025_2502.09994,
  title={Decision Information Meets Large Language Models: The Future of Explainable Operations Research},
  author={Yansen Zhang and Qingcan Kang and Wing Yin Yu and Hailei Gong and Xiaojin Fu and Xiongwei Han and Tao Zhong and Chen Ma},
  journal={arXiv preprint arXiv:2502.09994},
  year={2025}
}