ResearchTrend.AI

arXiv:2210.10464

On the Power of Pre-training for Generalization in RL: Provable Benefits and Hardness

19 October 2022
Haotian Ye, Xiaoyu Chen, Liwei Wang, S. Du
OffRL

Papers citing "On the Power of Pre-training for Generalization in RL: Provable Benefits and Hardness"

5 / 5 papers shown
A Classification View on Meta Learning Bandits
Mirco Mutti, Jeongyeol Kwon, Shie Mannor, Aviv Tamar
06 Apr 2025

Hybrid Transfer Reinforcement Learning: Provable Sample Efficiency from Shifted-Dynamics Data
Chengrui Qu, Laixi Shi, Kishan Panaganti, Pengcheng You, Adam Wierman
OffRL, OnRL
06 Nov 2024

Test-Time Regret Minimization in Meta Reinforcement Learning
Mirco Mutti, Aviv Tamar
04 Jun 2024

Why Generalization in RL is Difficult: Epistemic POMDPs and Implicit Partial Observability
Dibya Ghosh, Jad Rahme, Aviral Kumar, Amy Zhang, Ryan P. Adams, Sergey Levine
OffRL
13 Jul 2021

When Is Generalizable Reinforcement Learning Tractable?
Dhruv Malik, Yuanzhi Li, Pradeep Ravikumar
OffRL
01 Jan 2021