Sample-Efficient Expert Query Control in Active Imitation Learning via Conformal Prediction

Main: 7 pages
Bibliography: 1 page
Figures: 4
Tables: 5
Abstract

Active imitation learning (AIL) combats covariate shift by querying an expert during training. However, expert action labeling often dominates the cost, especially in GPU-intensive simulators, human-in-the-loop settings, and robot fleets that revisit near-duplicate states. We present Conformalized Rejection Sampling for Active Imitation Learning (CRSAIL), a querying rule that requests an expert action only when the visited state is under-represented in the expert-labeled dataset. CRSAIL scores state novelty by the distance to the K-th nearest expert state and sets a single global threshold via conformal prediction. This threshold is the empirical (1-α) quantile of on-policy calibration scores, providing a distribution-free calibration rule that links α to the expected query rate and makes α a task-agnostic tuning knob. This state-space querying strategy is robust to outliers and, unlike safety-gate-based AIL, can be run without real-time expert takeovers: we roll out full trajectories (episodes) with the learner and only afterward query the expert on a subset of visited states. Evaluated on MuJoCo robotics tasks, CRSAIL matches or exceeds expert-level reward while reducing total expert queries by up to 96% vs. DAgger and up to 65% vs. prior AIL methods, with empirical robustness to α and K, easing deployment on novel systems with unknown dynamics.
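The querying rule described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes Euclidean distance for the K-nearest-neighbor novelty score and NumPy arrays of flattened states; all function names are illustrative.

```python
import numpy as np

def knn_novelty(state, expert_states, k=5):
    # Novelty score: distance to the k-th nearest expert-labeled state.
    dists = np.linalg.norm(expert_states - state, axis=1)
    return np.partition(dists, k - 1)[k - 1]

def conformal_threshold(calib_states, expert_states, alpha=0.1, k=5):
    # Global threshold: empirical (1 - alpha) quantile of novelty scores
    # computed on on-policy calibration states, so roughly an alpha
    # fraction of future on-policy states exceed it (expected query rate).
    scores = np.array([knn_novelty(s, expert_states, k) for s in calib_states])
    return np.quantile(scores, 1 - alpha)

def should_query(state, expert_states, tau, k=5):
    # Query the expert only when the state is under-represented,
    # i.e. its novelty exceeds the conformal threshold tau.
    return knn_novelty(state, expert_states, k) > tau
```

Because the threshold is a quantile of on-policy scores, α directly controls the expected fraction of visited states that trigger a query, which is what makes it a task-agnostic knob.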
