
Revisiting the Minimalist Approach to Offline Reinforcement Learning
Neural Information Processing Systems (NeurIPS), 2023
arXiv: 2305.09836 (v2, latest) · 16 May 2023
Denis Tarasov, Vladislav Kurenkov, Alexander Nikulin, Sergey Kolesnikov
OffRL

Papers citing "Revisiting the Minimalist Approach to Offline Reinforcement Learning" (21 papers)

Diffusion Policies with Value-Conditional Optimization for Offline Reinforcement Learning
Yunchang Ma, Tenglong Liu, Yixing Lan, Xin Yin, Changxin Zhang, Xinglong Zhang, Xin Xu
OffRL · 12 Nov 2025

Multi-agent Coordination via Flow Matching
Dongsu Lee, Daehee Lee, Amy Zhang
07 Nov 2025

Generalizing Beyond Suboptimality: Offline Reinforcement Learning Learns Effective Scheduling through Random Data
Jesse van Remmerden, Zaharah Bukhsh, Yingqian Zhang
OffRL, OnRL · 12 Sep 2025

floq: Training Critics via Flow-Matching for Scaling Compute in Value-Based RL
Bhavya Agrawalla, Michal Nauman, Khush Agarwal, Aviral Kumar
OffRL · 08 Sep 2025

Penalizing Infeasible Actions and Reward Scaling in Reinforcement Learning with Offline Data
Jeonghye Kim, Yongjae Shin, Whiyoung Jung, Sunghoon Hong, Deunsol Yoon, Y. Sung, Kanghoon Lee, Woohyung Lim
OffRL · 11 Jul 2025

Reinforcement Learning with Action Chunking
Qiyang Li, Zhiyuan Zhou, Sergey Levine
OffRL, OnRL · 10 Jul 2025

Steering Your Diffusion Policy with Latent Space Reinforcement Learning
Andrew Wagenmaker, Mitsuhiko Nakamoto, Yunchu Zhang, S. Park, Waleed Yagoub, Anusha Nagabandi, Abhishek Gupta, Sergey Levine
OffRL · 18 Jun 2025

Intention-Conditioned Flow Occupancy Models
Chongyi Zheng, S. Park, Sergey Levine, Benjamin Eysenbach
AI4TS, OffRL, AI4CE · 10 Jun 2025

Horizon Reduction Makes RL Scalable
Seohong Park, Kevin Frans, Deepinder Mann, Benjamin Eysenbach, Aviral Kumar, Sergey Levine
OffRL · 04 Jun 2025

Normalizing Flows are Capable Models for RL
Raj Ghugare, Benjamin Eysenbach
OffRL, AI4CE · 29 May 2025

SOReL and TOReL: Two Methods for Fully Offline Reinforcement Learning
Mattie Fellows, Clarisse Wibault, Uljad Berdica, Johannes Forkel, Jakob Foerster, Michael A. Osborne
OffRL, OnRL · 28 May 2025

Scaling Offline RL via Efficient and Expressive Shortcut Models
Nicolas Espinosa-Dice, Yiyi Zhang, Yiding Chen, Bradley Guo, Owen Oertell, Gokul Swamy, Kianté Brantley, Wen Sun
OffRL, LRM · 28 May 2025

An Optimal Discriminator Weighted Imitation Perspective for Reinforcement Learning
International Conference on Learning Representations (ICLR), 2025
Haoran Xu, Shuozhe Li, Harshit S. Sikchi, S. Niekum, Amy Zhang
OffRL · 17 Apr 2025

Yes, Q-learning Helps Offline In-Context RL
Denis Tarasov, Alexander Nikulin, Ilya Zisman, Albina Klepach, Andrei Polubarov, Nikita Lyubaykin, Alexander Derevyagin, Igor Kiselev, Vladislav Kurenkov
OffRL, OnRL · 24 Feb 2025

B3C: A Minimalist Approach to Offline Multi-Agent Reinforcement Learning
Woojun Kim, Katia Sycara
OffRL · 30 Jan 2025

Leveraging Skills from Unlabeled Prior Data for Efficient Online Exploration
Max Wilcoxson, Qiyang Li, Kevin Frans, Sergey Levine
SSL, OffRL, OnRL · 23 Oct 2024

Is Value Functions Estimation with Classification Plug-and-play for Offline Reinforcement Learning?
Denis Tarasov, Kirill Brilliantov, Dmitrii Kharlapenko
OffRL · 10 Jun 2024

Dissecting Deep RL with High Update Ratios: Combatting Value Divergence
Marcel Hussing, C. Voelcker, Igor Gilitschenski, Amir-massoud Farahmand, Eric Eaton
09 Mar 2024

Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning
International Conference on Learning Representations (ICLR), 2023
Ruizhe Shi, Yuyao Liu, Yanjie Ze, Simon S. Du, Huazhe Xu
OffRL, RALM · 31 Oct 2023

Katakomba: Tools and Benchmarks for Data-Driven NetHack
Neural Information Processing Systems (NeurIPS), 2023
Vladislav Kurenkov, Alexander Nikulin, Denis Tarasov, Sergey Kolesnikov
OffRL · 14 Jun 2023

CORL: Research-oriented Deep Offline Reinforcement Learning Library
Neural Information Processing Systems (NeurIPS), 2022
Denis Tarasov, Alexander Nikulin, Dmitry Akimov, Vladislav Kurenkov, Sergey Kolesnikov
OffRL · 13 Oct 2022