Efficient Evaluation of Multi-Task Robot Policies With Active Experiment Selection

Evaluating learned robot control policies to determine their physical task-level capabilities costs experimenter time and effort. The growing number of policies and tasks exacerbates this issue. It is impractical to test every policy on every task multiple times; each trial requires a manual environment reset, and each task change involves re-arranging objects or even changing robots. Naively selecting a random subset of tasks and policies to evaluate is a high-cost solution that yields unreliable, incomplete results. In this work, we formulate robot evaluation as an active testing problem. We propose to model the distribution of robot performance across all tasks and policies as we sequentially execute experiments. Tasks often share similarities that can reveal potential relationships in policy behavior, and we show that natural language is a useful prior for modeling these relationships between tasks. We then leverage this formulation to reduce experimenter effort by using a cost-aware expected information gain heuristic to efficiently select informative trials. Our framework accommodates both continuous and discrete performance outcomes. We conduct experiments on existing evaluation data from real robots and simulations. By prioritizing informative trials, our framework reduces the cost of calculating evaluation metrics for robot policies across many tasks.
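The cost-aware selection idea in the abstract can be illustrated with a minimal sketch. This is not the paper's method: the expected information gain heuristic is stood in for by a simpler variance-reduction proxy under an independent Beta-Bernoulli model of success probability per (policy, task) cell, and the cell names, priors, and reset costs below are all illustrative assumptions.

```python
# Minimal sketch of cost-aware active trial selection (illustrative only):
# each (policy, task) cell keeps a Beta(a, b) posterior over its success
# probability; we pick the cell whose next trial buys the most uncertainty
# reduction per unit of experimenter cost.

def beta_var(a, b):
    """Variance of a Beta(a, b) distribution."""
    s = a + b
    return (a * b) / (s * s * (s + 1.0))

def info_proxy(a, b):
    """Expected reduction in posterior variance from one Bernoulli trial.

    By the law of total variance this equals the variance of the posterior
    mean, so it is always positive and shrinks as the cell is sampled more.
    A variance-reduction proxy, not the paper's expected information gain.
    """
    p = a / (a + b)  # posterior-predictive probability of success
    expected_post_var = p * beta_var(a + 1.0, b) + (1.0 - p) * beta_var(a, b + 1.0)
    return beta_var(a, b) - expected_post_var

def select_trial(cells):
    """cells: dict mapping (policy, task) -> [a, b, cost].

    Returns the key with the highest information gain per unit cost.
    """
    return max(cells, key=lambda k: info_proxy(cells[k][0], cells[k][1]) / cells[k][2])

def update(cells, key, success):
    """Bayesian update of one cell after observing a trial outcome."""
    cells[key][0 if success else 1] += 1.0
```

A toy usage, with hypothetical policies and tasks: a cheap, uncertain cell is preferred over a well-sampled, expensive one.

```python
cells = {
    ("pi_A", "pick_mug"):    [1.0, 1.0, 1.0],  # uniform prior, cheap reset
    ("pi_A", "open_drawer"): [5.0, 5.0, 3.0],  # already well sampled, costly reset
}
key = select_trial(cells)        # -> ("pi_A", "pick_mug")
update(cells, key, success=True) # posterior becomes Beta(2, 1)
```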
@article{anwar2025_2502.09829,
  title   = {Efficient Evaluation of Multi-Task Robot Policies With Active Experiment Selection},
  author  = {Abrar Anwar and Rohan Gupta and Zain Merchant and Sayan Ghosh and Willie Neiswanger and Jesse Thomason},
  journal = {arXiv preprint arXiv:2502.09829},
  year    = {2025}
}