Verifying Graph Neural Networks with Readout is Intractable

Main: 9 pages
Appendix: 30 pages
Bibliography: 5 pages
Figures: 10
Tables: 37
Abstract

We introduce a logical language for reasoning about quantized aggregate-combine graph neural networks with global readout (ACR-GNNs). We provide a logical characterization of these networks and use it to prove that verification tasks for quantized GNNs with readout are (co)NEXPTIME-complete. This result implies that verifying quantized GNNs with readout is computationally intractable, a finding of direct relevance to the substantial research effort devoted to ensuring the safety of GNN-based systems. We also experimentally demonstrate that quantized ACR-GNN models are lightweight while maintaining good accuracy and generalization capabilities relative to non-quantized models.
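As a rough illustration of the architecture the abstract refers to, the sketch below implements one quantized aggregate-combine layer with global readout in NumPy. The sum aggregation, the truncated-ReLU activation, and the fixed-point rounding in `quantize` are typical choices assumed for illustration, not the paper's exact definitions.

```python
import numpy as np

def quantize(x, scale=8):
    """Round values to a fixed-point grid with the given scale (hypothetical choice)."""
    return np.round(x * scale) / scale

def acr_gnn_layer(X, adj, W_self, W_agg, W_read, b, scale=8):
    """One aggregate-combine layer with global readout (ACR), quantized after the update.

    X       : (n, d_in) node feature matrix
    adj     : (n, n) 0/1 adjacency matrix
    W_self, W_agg, W_read : (d_in, d_out) weight matrices
    b       : (d_out,) bias vector
    """
    agg = adj @ X                 # sum over each node's neighbours
    readout = X.sum(axis=0)       # global readout: sum over all nodes
    pre = X @ W_self + agg @ W_agg + readout @ W_read + b
    out = np.clip(pre, 0.0, 1.0)  # truncated ReLU keeps features in [0, 1]
    return quantize(out, scale)   # restrict features to a finite value set
```

Quantization restricts node features to a finite set of values, which is what makes the logical characterization and the complexity analysis in the paper applicable.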
