Risks of Practicing Large Language Models in Smart Grid: Threat Modeling and Validation

Abstract

Large language models (LLMs) represent a significant breakthrough in artificial intelligence and hold potential for applications within smart grids. However, as demonstrated in prior literature, AI technologies are susceptible to various types of attacks. It is therefore crucial to investigate and evaluate the risks associated with LLMs before deploying them in critical infrastructure such as smart grids. In this paper, we systematically evaluated the risks of LLMs, identified two major types of attacks relevant to potential smart grid LLM applications, and presented the corresponding threat models. We validated these attacks using popular LLMs and real smart grid data. Our validation demonstrates that attackers can inject bad data into, and retrieve domain knowledge from, LLMs employed in different smart grid applications.
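
The sketch below is a minimal, hypothetical illustration of the bad data injection threat the abstract describes: an attacker who can tamper with grid telemetry before it is embedded in a prompt can bias the output of an LLM-based application. The names here (query_llm, forecast_load) and the numbers are illustrative placeholders under assumed conditions, not the paper's implementation or data.

# Hypothetical sketch of the bad data injection threat model: the
# application embeds raw grid telemetry in an LLM prompt, so an attacker
# who controls the telemetry controls what the model reasons over.
# query_llm and forecast_load are placeholder names, not the paper's code.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to any chat-style LLM API."""
    raise NotImplementedError("wire this to an LLM provider")

def forecast_load(hourly_mw: list[float]) -> str:
    # The model has no independent way to verify these readings;
    # this unvalidated input path is the attack surface.
    prompt = (
        "You are a power system assistant. Given the last six hourly "
        f"load readings in MW: {hourly_mw}, predict the next hour's load."
    )
    return query_llm(prompt)

clean = [412.0, 418.5, 425.1, 430.7, 436.2, 441.0]
tampered = clean[:4] + [120.0, 95.0]  # attacker spoofs a sudden collapse

# forecast_load(tampered) would steer the model toward an artificially
# low forecast, illustrating why telemetry should be validated before
# it ever reaches the prompt.

A hardened deployment would presumably add input validation, such as range and rate-of-change checks on each reading, before constructing the prompt.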

@article{li2025_2405.06237,
  title={Risks of Practicing Large Language Models in Smart Grid: Threat Modeling and Validation},
  author={Jiangnan Li and Yingyuan Yang and Jinyuan Sun},
  journal={arXiv preprint arXiv:2405.06237},
  year={2025}
}