Learning Verifiable Control Policies Using Relaxed Verification

To provide safety guarantees for learning-based control systems, recent work has developed formal verification methods that are applied after training ends. However, if the trained policy does not meet the specifications, or if the verification algorithm is overly conservative, establishing these guarantees may not be possible. Instead, this work proposes performing verification throughout training, with the ultimate aim of producing policies whose properties can be checked at runtime with lightweight, relaxed verification algorithms. The approach is to use differentiable reachability analysis and to incorporate new components into the loss function. Numerical experiments on quadrotor and unicycle models highlight the ability of this approach to produce learned control policies that satisfy desired reach-avoid and invariance specifications.
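To make the core idea concrete, below is a minimal, hypothetical sketch (not the authors' code) of training a policy with a differentiable reachability pass in the loss: interval bounds are propagated through a small policy network and linear dynamics, and the resulting reachable boxes are penalized for overlapping an avoid set and for missing a goal. The dynamics matrices, box sizes, penalty weights, and helper names (`interval_affine`, `policy_bounds`, `step_reach`) are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: differentiate through interval-bound reachability
# and add reach-avoid penalties to the training loss. All constants and
# helper names here are assumptions for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)
A = torch.tensor([[1.0, 0.1], [0.0, 1.0]])   # assumed double-integrator-like dynamics
B = torch.tensor([[0.0], [0.1]])

policy = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

def interval_affine(W, lo, hi, b=None):
    """Exact interval image of x -> W x (+ b) over the box [lo, hi]."""
    c, r = (lo + hi) / 2, (hi - lo) / 2
    c2 = c @ W.T + (b if b is not None else 0.0)
    r2 = r @ W.abs().T
    return c2 - r2, c2 + r2

def policy_bounds(lo, hi):
    """Interval bound propagation (IBP) through the policy network."""
    for layer in policy:
        if isinstance(layer, nn.Linear):
            lo, hi = interval_affine(layer.weight, lo, hi, layer.bias)
        else:                       # ReLU is monotone: apply to endpoints
            lo, hi = layer(lo), layer(hi)
    return lo, hi

def step_reach(x_lo, x_hi):
    """Box over-approximation of one closed-loop step A x + B pi(x)."""
    u_lo, u_hi = policy_bounds(x_lo, x_hi)
    ax_lo, ax_hi = interval_affine(A, x_lo, x_hi)
    bu_lo, bu_hi = interval_affine(B, u_lo, u_hi)
    return ax_lo + bu_lo, ax_hi + bu_hi

avoid_lo = torch.tensor([0.4, -0.2])  # box the reachable set must miss (assumed)
avoid_hi = torch.tensor([0.8, 0.2])
goal = torch.tensor([0.0, 0.0])

opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
for it in range(500):
    lo = torch.tensor([[0.9, -0.1]])  # initial-state box (assumed)
    hi = torch.tensor([[1.1, 0.1]])
    avoid_pen = 0.0
    for _ in range(20):               # reachability over a 20-step horizon
        lo, hi = step_reach(lo, hi)
        # per-dimension overlap of the reachable box with the avoid box;
        # the product is positive only if every dimension overlaps
        overlap = torch.relu(torch.minimum(hi, avoid_hi)
                             - torch.maximum(lo, avoid_lo))
        avoid_pen = avoid_pen + overlap.prod(dim=-1).sum()
    reach_pen = ((lo + hi) / 2 - goal).pow(2).sum()  # pull the final box to the goal
    loss = reach_pen + 10.0 * avoid_pen              # assumed penalty weight
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the interval bounds are built from differentiable operations, gradients of the avoid-set penalty flow back into the policy weights, which is what lets verification run inside the training loop rather than only after it.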
@article{chaudhury2025_2504.16879,
  title   = {Learning Verifiable Control Policies Using Relaxed Verification},
  author  = {Puja Chaudhury and Alexander Estornell and Michael Everett},
  journal = {arXiv preprint arXiv:2504.16879},
  year    = {2025}
}