RadGrad: Active learning with loss gradients

18 June 2019
Paul Budnarain
Renato Ferreira Pinto Junior
Ilan Kogan
Abstract

Solving sequential decision prediction problems, including those in imitation learning settings, requires mitigating the problem of covariate shift. The standard approach, DAgger, relies on capturing expert behaviour in all states that the agent reaches. In real-world settings, querying an expert is costly. We propose a new active learning algorithm that selectively queries the expert, based on both a prediction of agent error and a proxy for agent risk, maintaining the performance of unrestrained expert-querying systems while substantially reducing the number of expert queries made. We show that our approach, RadGrad, has the potential to improve upon existing safety-aware algorithms, and matches or exceeds the performance of DAgger and variants (e.g., SafeDAgger) in one simulated environment. However, we also find that a more complex environment poses challenges not only to our proposed method, but also to existing safety-aware algorithms, which do not match the performance of DAgger in our experiments.
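
The selective-querying idea can be sketched in a few lines of Python. The snippet below runs a DAgger-style aggregate-and-retrain loop in which the expert is asked for an action only when a learned prediction of the agent's error, combined with a simple risk proxy, exceeds a threshold, as the abstract describes. The toy linear environment and policies, the particular error predictor, the action-magnitude risk proxy, and the QUERY_THRESHOLD value are illustrative assumptions, not the paper's implementation.

import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 4, 2
QUERY_THRESHOLD = 0.5          # assumed selection threshold (hypothetical)
ROUNDS, STEPS_PER_ROUND = 10, 50

# Stand-in "expert": a fixed linear policy (illustrative, not from the paper).
W_EXPERT = rng.normal(size=(ACTION_DIM, STATE_DIM))


def expert_action(state):
    return W_EXPERT @ state


def ridge_fit(X, Y, lam=1e-3):
    # Ridge-regularised least squares: returns W such that Y ~ X @ W.T.
    X, Y = np.asarray(X), np.asarray(Y)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y).T


W_agent = np.zeros((ACTION_DIM, STATE_DIM))   # imitation policy
w_err = np.zeros(STATE_DIM)                   # predicts the agent's error from the state

states_lbl, actions_lbl, errors_lbl = [], [], []
total_queries = 0

for rnd in range(ROUNDS):
    state = rng.normal(size=STATE_DIM)
    for _ in range(STEPS_PER_ROUND):
        a_agent = W_agent @ state

        # Selection criterion: a prediction of agent error combined with a
        # risk proxy (here the agent's action magnitude, a stand-in choice).
        predicted_error = float(w_err @ state)
        risk_proxy = float(np.linalg.norm(a_agent))
        if rnd == 0 or predicted_error * risk_proxy > QUERY_THRESHOLD:
            a_exp = expert_action(state)              # costly expert query
            total_queries += 1
            states_lbl.append(state)
            actions_lbl.append(a_exp)
            errors_lbl.append(np.linalg.norm(a_agent - a_exp))

        # Toy dynamics rolled out under the agent's own action, so the agent
        # visits its own state distribution, as in DAgger.
        state = (0.8 * state
                 + 0.2 * np.concatenate([a_agent, a_agent])
                 + 0.1 * rng.normal(size=STATE_DIM))

    # Aggregate-and-retrain on the queried states only.
    W_agent = ridge_fit(states_lbl, actions_lbl)
    w_err = ridge_fit(states_lbl, np.asarray(errors_lbl)[:, None])[0]

print(f"expert queries used: {total_queries} / {ROUNDS * STEPS_PER_ROUND}")

The first round queries every state to bootstrap both the policy and the error predictor; afterwards, only states flagged by the combined error-and-risk criterion trigger an expert query, which is the mechanism by which the abstract's approach reduces labelling cost.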
