
Hessian-aware Training for Enhancing DNNs Resilience to Parameter Corruptions

Abstract

Deep neural networks are not resilient to parameter corruptions: even a single bit-flip in their in-memory parameters can cause an accuracy drop of over 10%, and in the worst cases, up to 99%. This susceptibility poses great challenges for deploying models on computing platforms, where adversaries can induce bit-flips through software or bitwise corruptions may occur naturally. Most prior work addresses this issue with hardware- or system-level approaches, such as integrating additional hardware components to verify a model's integrity at inference. However, these methods have not been widely deployed because they require infrastructure- or platform-wide modifications. In this paper, we propose a new approach to addressing this issue: training models to be more resilient to bitwise corruptions of their parameters. Our approach, Hessian-aware training, promotes models with flatter loss surfaces. We show that, while training methods exist that are designed to improve generalization through Hessian-based approaches, they do not enhance resilience to parameter corruptions. In contrast, models trained with our method demonstrate increased resilience to parameter corruptions, in particular a 20-50% reduction in the number of bits whose individual flipping leads to a 90-100% accuracy drop. Moreover, we show the synergy between our method and existing hardware- and system-level defenses.
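To see why a single bit-flip can be so damaging, consider the IEEE-754 float32 layout used for model weights: toggling a high exponent bit can inflate a small weight by dozens of orders of magnitude. The sketch below (a generic illustration, not the paper's method; the helper name `flip_bit` is our own) simulates one such corruption.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of a float32 value and return the corrupted float."""
    # Pack the float into its 4-byte IEEE-754 representation.
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    as_int ^= 1 << bit  # toggle the chosen bit (0 = mantissa LSB, 31 = sign)
    (corrupted,) = struct.unpack("<f", struct.pack("<I", as_int))
    return corrupted

# Flipping the most significant exponent bit (bit 30) of a typical
# small weight produces an astronomically large value (~1.7e38),
# which can saturate activations downstream.
print(flip_bit(0.5, 30))
# Flipping a low mantissa bit barely changes the weight.
print(flip_bit(0.5, 0))
```

This asymmetry is why only a subset of bits are catastrophic when flipped, which matches the abstract's framing of counting "bits whose individual flipping" causes large accuracy drops.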

@article{prato2025_2504.01933,
  title={Hessian-aware Training for Enhancing DNNs Resilience to Parameter Corruptions},
  author={Tahmid Hasan Prato and Seijoon Kim and Lizhong Chen and Sanghyun Hong},
  journal={arXiv preprint arXiv:2504.01933},
  year={2025}
}