Is Your Imitation Learning Policy Better than Mine? Policy Comparison with Near-Optimal Stopping

14 March 2025
David Snyder
Asher Hancock
Apurva Badithela
Emma Dixon
Patrick "Tree" Miller
Rares Ambrus
Anirudha Majumdar
Masha Itkina
Haruki Nishimura
Abstract

Imitation learning has enabled robots to perform complex, long-horizon tasks in challenging dexterous manipulation settings. As new methods are developed, they must be rigorously evaluated and compared against corresponding baselines through repeated evaluation trials. However, policy comparison is fundamentally constrained by a small feasible sample size (e.g., 10 or 50) due to significant human effort and limited inference throughput of policies. This paper proposes a novel statistical framework for rigorously comparing two policies in the small sample size regime. Prior work in statistical policy comparison relies on batch testing, which requires a fixed, pre-determined number of trials and lacks flexibility in adapting the sample size to the observed evaluation data. Furthermore, extending the test with additional trials risks inducing inadvertent p-hacking, undermining statistical assurances. In contrast, our proposed statistical test is sequential, allowing researchers to decide whether or not to run more trials based on intermediate results. This adaptively tailors the number of trials to the difficulty of the underlying comparison, saving significant time and effort without sacrificing probabilistic correctness. Extensive numerical simulation and real-world robot manipulation experiments show that our test achieves near-optimal stopping, letting researchers stop evaluation and make a decision in a near-minimal number of trials. Specifically, it reduces the number of evaluation trials by up to 37% as compared to state-of-the-art baselines, while preserving the probabilistic correctness and statistical power of the comparison. Moreover, our method is strongest in the most challenging comparison instances (requiring the most evaluation trials); in a multi-task comparison scenario, we save the evaluator more than 200 simulation rollouts.
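To illustrate the general idea of a sequential, anytime-valid comparison, the sketch below implements a generic betting-style supermartingale (e-process) test for paired success/failure trials. This is not the authors' specific procedure from the paper; the function name, the paired-trial setup, and the fixed bet size are illustrative assumptions. Under the null hypothesis that policy A's success rate does not exceed policy B's, the wealth process is a nonnegative supermartingale, so stopping and rejecting whenever it reaches 1/alpha controls the type-I error at level alpha at any sample size (by Ville's inequality), which is what allows the evaluator to peek at intermediate results without p-hacking.

```python
import numpy as np


def sequential_policy_comparison(outcomes_a, outcomes_b, alpha=0.05, bet=0.5):
    """Anytime-valid sequential comparison of two policies from paired
    binary trials (1 = success, 0 = failure).

    Generic betting-style e-process sketch, not the paper's exact test.
    H0: success rate of A <= success rate of B.
    """
    wealth = 1.0
    for t, (a, b) in enumerate(zip(outcomes_a, outcomes_b), start=1):
        # x lies in {0, 0.5, 1} and has mean <= 0.5 under H0.
        x = 0.5 * (1.0 + a - b)
        # With 0 <= bet < 2, the wealth stays nonnegative and is a
        # supermartingale under H0; adaptive betting schemes would tune
        # `bet` from past data to stop sooner.
        wealth *= 1.0 + bet * (x - 0.5)
        if wealth >= 1.0 / alpha:
            return "reject H0 (A better)", t
    return "insufficient evidence", len(outcomes_a)


# Example: simulated paired trials with true success rates 0.8 vs 0.5.
rng = np.random.default_rng(0)
a = (rng.random(200) < 0.8).astype(int)
b = (rng.random(200) < 0.5).astype(int)
print(sequential_policy_comparison(a, b))
```

In this toy run the test typically stops well before all 200 paired trials are used; the trade-off in choosing `bet` is that larger bets stop faster on easy comparisons but grow the wealth more slowly when the gap between policies is small.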

@article{snyder2025_2503.10966,
  title={Is Your Imitation Learning Policy Better than Mine? Policy Comparison with Near-Optimal Stopping},
  author={David Snyder and Asher James Hancock and Apurva Badithela and Emma Dixon and Patrick Miller and Rares Andrei Ambrus and Anirudha Majumdar and Masha Itkina and Haruki Nishimura},
  journal={arXiv preprint arXiv:2503.10966},
  year={2025}
}