Learning Vision-Based Neural Network Controllers with Semi-Probabilistic Safety Guarantees
Ensuring safety in autonomous systems with vision-based control remains a critical challenge due to the high dimensionality of image inputs and the unknown relationship between the true system state and its visual manifestation. Existing methods for learning-based control in such settings typically lack formal safety guarantees. To address this challenge, we introduce a novel semi-probabilistic verification framework that integrates reachability analysis with conditional generative adversarial networks and distribution-free tail bounds to enable efficient and scalable verification of vision-based neural network controllers. Next, we develop a gradient-based training approach that combines a novel safety loss function, a safety-aware data-sampling strategy that efficiently selects and stores critical training examples, and curriculum learning to efficiently synthesize safe controllers within the semi-probabilistic framework. Empirical evaluations in an X-Plane 11 airplane-landing simulation, CARLA-simulated autonomous lane following, and F1Tenth lane following in a physical, visually rich miniature environment demonstrate the effectiveness of our method in achieving formal safety guarantees while maintaining strong nominal performance. Our code is available at this https URL.
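The abstract does not spell out the verification algorithm, but one way to picture how "distribution-free tail bounds" could certify a vision-based controller is a Monte Carlo check combined with a one-sided Hoeffding bound: sample states, render them through a generative image model standing in for the conditional GAN, run the controller, propagate an over-approximated reachability step, and bound the violation probability. The sketch below is a minimal illustration under those assumptions; `sample_state`, `render`, `controller`, `reach_step`, and `is_safe` are all hypothetical placeholders, not the paper's actual interfaces.

import numpy as np

def hoeffding_upper_bound(num_violations, n, delta):
    """Distribution-free tail bound: with confidence 1 - delta,
    P(unsafe) <= p_hat + sqrt(ln(1/delta) / (2n))."""
    p_hat = num_violations / n
    return p_hat + np.sqrt(np.log(1.0 / delta) / (2.0 * n))

def verify_controller(sample_state, render, controller, reach_step, is_safe,
                      n=10_000, delta=1e-3):
    """Monte Carlo safety check for a vision-based controller.

    sample_state: draws a true state from the operating region
    render:       state -> plausible image (stand-in for the conditional GAN)
    controller:   image -> control action (the neural network under test)
    reach_step:   (state, action) -> finite set of reachable next states
                  (an assumed over-approximation from reachability analysis)
    is_safe:      state -> bool membership test for the safe set
    """
    violations = 0
    for _ in range(n):
        x = sample_state()
        img = render(x)
        u = controller(img)
        # Count a violation if any point of the reachable over-approximation
        # leaves the safe set after one step.
        if not all(is_safe(x_next) for x_next in reach_step(x, u)):
            violations += 1
    return hoeffding_upper_bound(violations, n, delta)

If the returned bound falls below a target risk level, the controller is certified at confidence 1 - delta; otherwise the violating samples are natural candidates for the safety-aware data-sampling and retraining loop the abstract describes.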
@article{ma2025_2503.00191,
  title={Learning Vision-Based Neural Network Controllers with Semi-Probabilistic Safety Guarantees},
  author={Xinhang Ma and Junlin Wu and Hussein Sibai and Yiannis Kantaros and Yevgeniy Vorobeychik},
  journal={arXiv preprint arXiv:2503.00191},
  year={2025}
}