Optimistically Optimistic Exploration for Provably Efficient Infinite-Horizon Reinforcement and Imitation Learning
Antoine Moulin, Gergely Neu, Luca Viano
19 February 2025 · arXiv:2502.13900

Papers citing "Optimistically Optimistic Exploration for Provably Efficient Infinite-Horizon Reinforcement and Imitation Learning" (3 papers)

1. Inverse Q-Learning Done Right: Offline Imitation Learning in $Q^π$-Realizable MDPs
   Antoine Moulin, Gergely Neu, Luca Viano
   26 May 2025 · OffRL

2. Learning Equilibria from Data: Provably Efficient Multi-Agent Imitation Learning
   Till Freihaut, Luca Viano, Volkan Cevher, Matthieu Geist, Giorgia Ramponi
   23 May 2025

3. IL-SOAR: Imitation Learning with Soft Optimistic Actor cRitic
   Stefano Viel, Luca Viano, Volkan Cevher
   27 February 2025