Learning to Switch Between Machines and Humans

11 February 2020
Vahid Balazadeh Meresht
Abir De
Adish Singla
Manuel Gomez-Rodriguez
Abstract

Reinforcement learning agents have been mostly developed and evaluated under the assumption that they will operate in a fully autonomous manner -- they will take all actions. In this work, our goal is to develop algorithms that, by learning to switch control between machine and human agents, allow existing reinforcement learning agents to operate under different automation levels. To this end, we first formally define the problem of learning to switch control among agents in a team via a 2-layer Markov decision process. Then, we develop an online learning algorithm that uses upper confidence bounds on the agents' policies and the environment's transition probabilities to find a sequence of switching policies. We prove that the total regret of our algorithm with respect to the optimal switching policy is sublinear in the number of learning steps. Moreover, we also show that our algorithm can be used to find multiple sequences of switching policies across several independent teams of agents operating in similar environments, where it greatly benefits from maintaining shared confidence bounds for the environments' transition probabilities. Simulation experiments in obstacle avoidance in a semi-autonomous driving scenario illustrate our theoretical findings and demonstrate that, by exploiting the specific structure of the problem, our proposed algorithm is superior to problem-agnostic algorithms.
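The algorithm described in the abstract is optimism-based: maintain confidence bounds on the initially unknown transition dynamics induced by each agent's policy, plan optimistically against those bounds, and tighten them as trajectories are observed. The Python snippet below is a minimal sketch of that optimism-driven switching idea under toy assumptions (a small finite environment, two agents, an assumed per-step cost for human control plus a switching cost, and a simplified optimism step); it is not the paper's exact 2-layer MDP construction or its regret-guaranteed algorithm.

# A minimal sketch of the core idea -- optimism in the face of uncertainty
# applied to switching control -- NOT the paper's exact algorithm. The toy
# environment, cost structure, constants, and the simplified "shift mass to
# the best successor" optimism step are all illustrative assumptions.
import numpy as np

n_states, n_agents, horizon = 5, 2, 20       # toy sizes (assumed)
switch_cost = 0.2                            # cost of handing over control (assumed)
agent_cost = np.array([0.0, 0.5])            # per-step cost of machine vs. human control (assumed)

rng = np.random.default_rng(0)
# Hidden per-agent transition kernels P_true[d, s, s'] induced by each agent's policy.
P_true = rng.dirichlet(np.ones(n_states), size=(n_agents, n_states))

counts = np.ones((n_agents, n_states, n_states))  # Laplace-smoothed visit counts


def optimistic_switching_policy(counts, delta=0.1):
    """Finite-horizon backward induction with optimistic cost-to-go.

    The empirical transition estimate for each (agent, state) pair is shifted
    toward the most favorable successor state within an L1 confidence radius
    (a simplified stand-in for the UCRL-style inner maximization).
    """
    n_visits = counts.sum(axis=2)                       # (n_agents, n_states)
    P_hat = counts / n_visits[..., None]
    radius = np.sqrt(2 * np.log(2 * n_states * n_agents * horizon / delta) / n_visits)

    V = np.zeros((n_states, n_agents))                  # cost-to-go, indexed by (state, controlling agent)
    policy = np.zeros((horizon, n_states, n_agents), dtype=int)
    for t in reversed(range(horizon)):
        Q = np.zeros((n_states, n_agents, n_agents))    # (state, current agent, next agent)
        for d_next in range(n_agents):
            best = np.argmin(V[:, d_next])              # most favorable successor under d_next
            P_opt = P_hat[d_next].copy()
            bonus = np.minimum(radius[d_next] / 2, 1.0 - P_opt[:, best])
            P_opt[:, best] += bonus
            P_opt /= P_opt.sum(axis=1, keepdims=True)
            exp_V = P_opt @ V[:, d_next]
            for d in range(n_agents):
                Q[:, d, d_next] = agent_cost[d_next] + switch_cost * (d != d_next) + exp_V
        policy[t] = Q.argmin(axis=2)                    # who should control next, per (state, current agent)
        V = Q.min(axis=2)
    return policy


# One episode of interaction: act optimistically, then update the counts that
# tighten the confidence bounds for the next episode.
policy = optimistic_switching_policy(counts)
s, d = 0, 0                                             # start in state 0 with the machine in control
for t in range(horizon):
    d = policy[t, s, d]                                 # switching decision
    s_next = rng.choice(n_states, p=P_true[d, s])       # environment step under the chosen agent
    counts[d, s, s_next] += 1
    s = s_next

Because the confidence radii shrink with visit counts, several teams operating in similar environments could share the same counts array, which is the intuition behind the shared-bounds benefit mentioned in the abstract.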
