
Distributionally Robust Statistical Verification with Imprecise Neural Networks

Abstract

A particularly challenging problem in AI safety is providing guarantees on the behavior of high-dimensional autonomous systems. Verification approaches centered around reachability analysis fail to scale, and purely statistical approaches are constrained by the distributional assumptions they make about the sampling process. Instead, we pose a distributionally robust version of the statistical verification problem for black-box systems, where our performance guarantees hold over a large family of distributions. This paper proposes a novel approach based on uncertainty quantification using concepts from imprecise probabilities. A central piece of our approach is an ensemble technique called Imprecise Neural Networks, which provides the uncertainty quantification. Additionally, we solve the allied problem of exploring the input set using active learning. The active learning uses the exhaustive neural-network verification tool Sherlock to collect samples. An evaluation on multiple physical simulators in the OpenAI Gym MuJoCo environments with reinforcement-learned controllers demonstrates that our approach can provide useful and scalable guarantees for high-dimensional systems.
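
To make the ensemble idea concrete, here is a minimal sketch of one plausible reading of an Imprecise Neural Network: member predictions aggregated into pointwise lower and upper envelopes, yielding an interval rather than a point estimate. The class name, architecture, and min/max aggregation rule are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class ImpreciseEnsemble(nn.Module):
    """Toy stand-in for an Imprecise Neural Network: an ensemble of
    small MLPs whose prediction is the [min, max] envelope over members."""

    def __init__(self, in_dim, n_members=5, hidden=32):
        super().__init__()
        self.members = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, 1))
            for _ in range(n_members)
        ])

    def forward(self, x):
        preds = torch.stack([m(x) for m in self.members])  # (members, batch, 1)
        lower = preds.min(dim=0).values  # pointwise lower envelope
        upper = preds.max(dim=0).values  # pointwise upper envelope
        return lower, upper

if __name__ == "__main__":
    model = ImpreciseEnsemble(in_dim=4)
    x = torch.randn(8, 4)                # batch of 8 four-dimensional inputs
    lo, hi = model(x)
    print(lo.squeeze(-1))                # per-input lower bounds
    print(hi.squeeze(-1))                # per-input upper bounds

The width of the interval hi - lo then serves as a natural uncertainty signal: wide intervals mark regions of the input set where the ensemble members disagree.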

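The abstract's active-learning loop can likewise be sketched generically: query the system where the prediction interval is widest, then retrain on the new labels. The simulate oracle and the width-based acquisition rule below are hypothetical placeholders; the paper itself drives sample collection with the Sherlock verification tool, whose interface the abstract does not describe.

import torch

def acquire(model, candidates, k=4):
    """Select the k candidates with the widest prediction interval,
    i.e. where the ensemble members disagree the most."""
    with torch.no_grad():
        lo, hi = model(candidates)
    widths = (hi - lo).squeeze(-1)
    return candidates[widths.topk(k).indices]

def simulate(x):
    """Hypothetical black-box oracle standing in for a simulator rollout
    (the paper instead gathers samples via Sherlock)."""
    return x.pow(2).sum(dim=-1, keepdim=True)

def active_learning_step(model, optimizer, candidates):
    """One acquire-label-retrain iteration against the black box."""
    queries = acquire(model, candidates)   # most-uncertain inputs
    labels = simulate(queries)             # query the black box there
    lo, hi = model(queries)
    # Pull both envelope bounds toward the observed labels; only the
    # extreme ensemble members receive gradients through min/max.
    loss = ((lo - labels) ** 2 + (hi - labels) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Pairing this with the ImpreciseEnsemble above and torch.optim.Adam(model.parameters()) gives a runnable, if deliberately simplified, version of the loop.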
@article{dutta2025_2308.14815,
  title={Distributionally Robust Statistical Verification with Imprecise Neural Networks},
  author={Souradeep Dutta and Michele Caprio and Vivian Lin and Matthew Cleaveland and Kuk Jin Jang and Ivan Ruchkin and Oleg Sokolsky and Insup Lee},
  journal={arXiv preprint arXiv:2308.14815},
  year={2025}
}