Mitigating Adversarial Effects of False Data Injection Attacks in Power Grid

Deep Neural Networks (DNNs) have proven to be highly accurate at a variety of tasks in recent years. Their benefits have also been embraced in power grids, where they are used to detect False Data Injection Attacks (FDIA) during critical tasks such as state estimation. However, the vulnerabilities of DNNs, together with the distinct infrastructure of cyber-physical systems (CPS), can enable attackers to bypass the detection mechanism. Moreover, the divergent nature of CPS imposes limitations on conventional defense mechanisms against FDIA. In this paper, we propose a DNN framework with an additional layer that uses randomized input padding to mitigate adversarial effects. The primary advantage of our method is that, when deployed on a DNN model, it has a negligible impact on the model's performance even with larger padding sizes. We demonstrate the effectiveness of the framework through simulations on the IEEE 14-bus, 30-bus, 118-bus, and 300-bus systems. Furthermore, to evaluate the framework, we select attack techniques that generate subtle adversarial examples capable of bypassing the detection mechanism with ease.
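The abstract does not spell out how the randomized padding layer operates; the following is a minimal sketch, assuming a zero-padding layer that shifts each measurement vector by a fresh random offset before it reaches the detector. The class name RandomPad, the padding strategy, and the layer sizes are all illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class RandomPad(nn.Module):
    """Hypothetical randomization layer: zero-pads a 1-D measurement
    vector and places it at a random offset, so an attacker cannot
    anticipate the exact input alignment seen by the detector."""
    def __init__(self, pad_size: int):
        super().__init__()
        self.pad_size = pad_size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_measurements), e.g. one vector per grid snapshot
        batch, n = x.shape
        out = x.new_zeros(batch, n + self.pad_size)
        # resample a random offset on every forward pass
        offset = torch.randint(0, self.pad_size + 1, (1,)).item()
        out[:, offset:offset + n] = x
        return out

# Example: prepend the layer to a simple FDIA detector
# (measurement count and hidden width are assumed values)
detector = nn.Sequential(
    RandomPad(pad_size=8),    # padding size is a tunable assumption
    nn.Linear(54 + 8, 64),    # 54 measurements: an assumed 14-bus setup
    nn.ReLU(),
    nn.Linear(64, 2),         # benign vs. injected
)
```

Because the offset is resampled on every forward pass, an attacker crafting an adversarial measurement vector cannot rely on a fixed input alignment, while the benign signal itself passes through unaltered.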
@article{riya2025_2301.12487,
  title   = {Mitigating Adversarial Effects of False Data Injection Attacks in Power Grid},
  author  = {Farhin Farhad Riya and Shahinul Hoque and Yingyuan Yang and Jiangnan Li and Jinyuan Stella Sun and Hairong Qi},
  journal = {arXiv preprint arXiv:2301.12487},
  year    = {2025}
}