
On the Probabilistic Learnability of Compact Neural Network Preimage Bounds

Main: 7 pages · Appendix: 1 page · Bibliography: 1 page · 4 figures · 2 tables
Abstract

Although recent provable methods have been developed to compute preimage bounds for neural networks, their scalability is fundamentally limited by the #P-hardness of the problem. In this work, we adopt a novel probabilistic perspective, aiming to deliver solutions with high-confidence guarantees and bounded error. To this end, we investigate the potential of bootstrap-based and randomized approaches that can capture complex patterns in high-dimensional spaces, including input regions where a given output property holds. Specifically, we introduce the \textbf{R}andom \textbf{F}orest \textbf{Pro}perty \textbf{Ve}rifier (\texttt{RF-ProVe}), a method that exploits an ensemble of randomized decision trees to generate candidate input regions satisfying a desired output property and refines them through active resampling. Our theoretical derivations offer formal statistical guarantees on region purity and global coverage, providing a practical, scalable solution for computing compact preimage approximations in cases where exact solvers fail to scale.
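To make the pipeline described above concrete, here is a minimal, self-contained Python sketch of the general idea, not the authors' implementation: randomized axis-aligned trees grown on labeled samples propose "pure" leaf boxes as candidate preimage regions, and an active-resampling pass keeps only boxes whose fresh samples all satisfy the property, attaching a standard one-sided Monte Carlo purity bound. The toy map `net`, the property `prop`, and all parameter values are illustrative assumptions.

```python
import random

random.seed(0)

# Toy stand-in for a trained network: a fixed map from R^2 to R.
# (The real method targets neural networks; a linear map just keeps
# the example self-contained and easy to check.)
def net(x):
    return x[0] - 0.5 * x[1]

# Output property whose preimage we want to under-approximate.
def prop(y):
    return y > 0.0

def sample(box):
    """Draw one point uniformly from an axis-aligned box."""
    return [random.uniform(lo, hi) for (lo, hi) in box]

def split_box(box, axis, cut):
    left, right = list(box), list(box)
    lo, hi = box[axis]
    left[axis] = (lo, cut)
    right[axis] = (cut, hi)
    return left, right

def pure_leaves(box, pts, depth=0, max_depth=8):
    """Recursively split on a random axis at a random threshold and
    return the leaf boxes whose samples all satisfy the property."""
    labels = [prop(net(p)) for p in pts]
    if pts and all(labels):
        return [box]
    if depth >= max_depth or len(pts) < 8 or not any(labels):
        return []
    axis = random.randrange(len(box))
    lo, hi = box[axis]
    cut = random.uniform(lo, hi)
    lbox, rbox = split_box(box, axis, cut)
    lpts = [p for p in pts if p[axis] <= cut]
    rpts = [p for p in pts if p[axis] > cut]
    return (pure_leaves(lbox, lpts, depth + 1, max_depth)
            + pure_leaves(rbox, rpts, depth + 1, max_depth))

# Ensemble step: several randomized trees propose candidate regions.
domain = [(0.0, 1.0), (0.0, 1.0)]
candidates = []
for _ in range(10):
    pts = [sample(domain) for _ in range(400)]
    candidates.extend(pure_leaves(domain, pts))

# Active-resampling step: keep a candidate only if n fresh samples all
# satisfy the property; then with confidence 1 - delta its purity is at
# least delta ** (1 / n) (a standard one-sided Monte Carlo bound).
n, delta = 200, 1e-3
verified = [b for b in candidates
            if all(prop(net(sample(b))) for _ in range(n))]
purity_lb = delta ** (1.0 / n)  # ~0.966 for n = 200, delta = 1e-3
```

The statistical guarantee here is the elementary bound behind such schemes: if a box had purity below delta ** (1/n), the chance that all n fresh samples satisfy the property would be below delta. The paper's actual guarantees on region purity and global coverage are derived for its specific ensemble and resampling procedure.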
