
Amortized Safe Active Learning for Real-Time Data Acquisition: Pretrained Neural Policies from Simulated Nonparametric Functions

Abstract

Safe active learning (AL) is a sequential scheme for learning unknown systems while respecting safety constraints during data acquisition. Existing methods often rely on Gaussian processes (GPs) to model the task and safety constraints, requiring repeated GP updates and constrained acquisition optimization, which incur significant computational cost and make real-time decision-making challenging. We propose an amortized safe AL framework that replaces these expensive online computations with a pretrained neural policy. Inspired by recent advances in amortized Bayesian experimental design, we turn GPs into a pretraining simulator. Prior to AL deployment, we train our policy on simulated nonparametric functions, using Fourier feature-based GP sampling and a differentiable, safety-aware acquisition objective. At deployment, the policy selects safe and informative queries via a single forward pass, eliminating the need for GP inference or constrained optimization. This leads to substantial speed improvements while preserving safety and learning quality. Our framework is modular and can be adapted to unconstrained, time-sensitive AL tasks by omitting the safety requirement.
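The pretraining simulator described above draws nonparametric functions from a GP prior via Fourier features. The following is a minimal sketch of such sampling, not the authors' code: the RBF kernel, hyperparameters, and function names (sample_gp_function, task_f, safety_f) are illustrative assumptions.

# Minimal sketch of Fourier feature-based GP function sampling (illustrative only).
import numpy as np

def sample_gp_function(dim, lengthscale=0.3, variance=1.0, n_features=256, rng=None):
    """Draw one approximate GP sample f: R^dim -> R via random Fourier features."""
    rng = np.random.default_rng(rng)
    # Spectral frequencies of an RBF kernel with the given lengthscale.
    omega = rng.normal(0.0, 1.0 / lengthscale, size=(n_features, dim))
    phase = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    weights = rng.normal(0.0, 1.0, size=n_features)

    def f(x):
        # x: (n, dim) array of query locations; returns (n,) function values.
        feats = np.sqrt(2.0 * variance / n_features) * np.cos(x @ omega.T + phase)
        return feats @ weights

    return f

# During pretraining, one such function could play the role of the unknown task
# function and another the safety function; the policy is then trained to pick
# safe, informative queries on many such simulated draws.
task_f = sample_gp_function(dim=2, rng=0)
safety_f = sample_gp_function(dim=2, rng=1)
X = np.random.default_rng(2).uniform(-1.0, 1.0, size=(5, 2))
print(task_f(X), safety_f(X) > 0.0)  # function values and a hypothetical safety indicator

Because each draw is a differentiable function of its query locations, such samples can also be plugged into a differentiable, safety-aware acquisition objective for end-to-end policy training, as the abstract describes.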

@article{li2025_2501.15458,
  title={Amortized Safe Active Learning for Real-Time Data Acquisition: Pretrained Neural Policies from Simulated Nonparametric Functions},
  author={Cen-You Li and Marc Toussaint and Barbara Rakitsch and Christoph Zimmer},
  journal={arXiv preprint arXiv:2501.15458},
  year={2025}
}