ResearchTrend.AI

IDQL: Implicit Q-Learning as an Actor-Critic Method with Diffusion Policies

20 April 2023
Philippe Hansen-Estruch, Ilya Kostrikov, Michael Janner, J. Kuba, Sergey Levine
OffRL

Papers citing "IDQL: Implicit Q-Learning as an Actor-Critic Method with Diffusion Policies"

29 / 29 papers shown
What Matters for Batch Online Reinforcement Learning in Robotics?
Perry Dong, Suvir Mirchandani, Dorsa Sadigh, Chelsea Finn
OffRL · 14 · 0 · 0 · 12 May 2025

Wasserstein Convergence of Score-based Generative Models under Semiconvexity and Discontinuous Gradients
Stefano Bruno, Sotirios Sabanis
DiffM · 34 · 0 · 0 · 06 May 2025

Analytic Energy-Guided Policy Optimization for Offline Reinforcement Learning
Jifeng Hu, Sili Huang, Z. Yang, Shengchao Hu, Li Shen, H. Chen, Lichao Sun, Yi-Ju Chang, Dacheng Tao
OffRL · 53 · 0 · 0 · 03 May 2025

Uncertainty Comes for Free: Human-in-the-Loop Policies with Diffusion Models
Zhanpeng He, Yifeng Cao, M. Ciocarlie
52 · 0 · 0 · 26 Feb 2025

Hyperspherical Normalization for Scalable Deep Reinforcement Learning
Hojoon Lee, Youngdo Lee, Takuma Seno, Donghu Kim, Peter Stone, Jaegul Choo
63 · 1 · 0 · 24 Feb 2025

Learning a Diffusion Model Policy from Rewards via Q-Score Matching
Michael Psenka, Alejandro Escontrela, Pieter Abbeel, Yi-An Ma
DiffM · 86 · 20 · 0 · 17 Feb 2025

Skill Expansion and Composition in Parameter Space
Tenglong Liu, J. Li, Yinan Zheng, Haoyi Niu, Yixing Lan, Xin Xu, Xianyuan Zhan
51 · 4 · 0 · 09 Feb 2025

Planning-Guided Diffusion Policy Learning for Generalizable Contact-Rich Bimanual Manipulation
Xuanlin Li, Tong Zhao, Xinghao Zhu, Jiuguang Wang, Tao Pang, Kuan Fang
82 · 4 · 0 · 03 Dec 2024

Enhancing Exploration with Diffusion Policies in Hybrid Off-Policy RL: Application to Non-Prehensile Manipulation
Huy Le, Miroslav Gabriel, Tai Hoang, Gerhard Neumann, Ngo Anh Vien
99 · 1 · 0 · 22 Nov 2024

Q-Distribution guided Q-learning for offline reinforcement learning: Uncertainty penalized Q-value via consistency model
Jing Zhang, Linjiajie Fang, Kexin Shi, Wenjia Wang, Bing-Yi Jing
OffRL · 27 · 0 · 0 · 27 Oct 2024

Leveraging Skills from Unlabeled Prior Data for Efficient Online Exploration
Max Wilcoxson, Qiyang Li, Kevin Frans, Sergey Levine
SSL, OffRL, OnRL · 54 · 0 · 0 · 23 Oct 2024

On Diffusion Models for Multi-Agent Partial Observability: Shared Attractors, Error Bounds, and Composite Flow
Tonghan Wang, Heng Dong, Yanchen Jiang, David C. Parkes, Milind Tambe
DiffM · 37 · 2 · 0 · 17 Oct 2024

Steering Your Generalists: Improving Robotic Foundation Models via Value Guidance
Mitsuhiko Nakamoto, Oier Mees, Aviral Kumar, Sergey Levine
OffRL · 71 · 9 · 0 · 17 Oct 2024

DIAR: Diffusion-model-guided Implicit Q-learning with Adaptive Revaluation
Jaehyun Park, Yunho Kim, Sejin Kim, Byung-Jun Lee, Sundong Kim
OffRL · 13 · 1 · 0 · 15 Oct 2024

Residual Learning and Context Encoding for Adaptive Offline-to-Online Reinforcement Learning
Mohammadreza Nakhaei, Aidan Scannell, J. Pajarinen
OffRL · 43 · 1 · 0 · 12 Jun 2024

UDQL: Bridging The Gap between MSE Loss and The Optimal Value Function in Offline Reinforcement Learning
Yu Zhang, Rui Yu, Zhipeng Yao, Wenyuan Zhang, Jun Wang, Liming Zhang
OffRL · 35 · 0 · 0 · 05 Jun 2024

Amortizing intractable inference in diffusion models for vision, language, and control
S. Venkatraman, Moksh Jain, Luca Scimeca, Minsu Kim, Marcin Sendera, ..., Alexandre Adam, Jarrid Rector-Brooks, Yoshua Bengio, Glen Berseth, Nikolay Malkin
60 · 24 · 0 · 31 May 2024

Diffusion Actor-Critic: Formulating Constrained Policy Iteration as Diffusion Noise Regression for Offline Reinforcement Learning
Linjiajie Fang, Ruoxue Liu, Jing Zhang, Wenjia Wang, Bing-Yi Jing
OffRL · 41 · 1 · 0 · 31 May 2024

DNAct: Diffusion Guided Multi-Task 3D Policy Learning
Ge Yan, Yueh-hua Wu, Xiaolong Wang
VGen · 27 · 20 · 0 · 07 Mar 2024

Boosting Continuous Control with Consistency Policy
Yuhui Chen, Haoran Li, Dongbin Zhao
OffRL · 27 · 18 · 0 · 10 Oct 2023

H2O+: An Improved Framework for Hybrid Offline-and-Online RL with Dynamics Gaps
Haoyi Niu, Tianying Ji, Bingqi Liu, Haocheng Zhao, Xiangyu Zhu, Jianying Zheng, Pengfei Huang, Guyue Zhou, Jianming Hu, Xianyuan Zhan
OffRL, OnRL, AI4CE · 25 · 6 · 0 · 22 Sep 2023

LLQL: Logistic Likelihood Q-Learning for Reinforcement Learning
Outongyi Lv, Bingxin Zhou
OffRL · 21 · 0 · 0 · 05 Jul 2023

Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning
Mitsuhiko Nakamoto, Yuexiang Zhai, Anika Singh, Max Sobol Mark, Yi-An Ma, Chelsea Finn, Aviral Kumar, Sergey Levine
OffRL, OnRL · 109 · 108 · 0 · 09 Mar 2023

Offline Reinforcement Learning via High-Fidelity Generative Behavior Modeling
Huayu Chen, Cheng Lu, Chengyang Ying, Hang Su, Jun Zhu
DiffM, OffRL · 85 · 103 · 0 · 29 Sep 2022

Planning with Diffusion for Flexible Behavior Synthesis
Michael Janner, Yilun Du, J. Tenenbaum, Sergey Levine
DiffM · 202 · 622 · 0 · 20 May 2022

Offline Reinforcement Learning with Implicit Q-Learning
Ilya Kostrikov, Ashvin Nair, Sergey Levine
OffRL · 206 · 832 · 0 · 12 Oct 2021

EMaQ: Expected-Max Q-Learning Operator for Simple Yet Effective Offline and Online RL
Seyed Kamyar Seyed Ghasemipour, Dale Schuurmans, S. Gu
OffRL · 199 · 119 · 0 · 21 Jul 2020

Controlling Overestimation Bias with Truncated Mixture of Continuous Distributional Quantile Critics
Arsenii Kuznetsov, Pavel Shvechikov, Alexander Grishin, Dmitry Vetrov
131 · 184 · 0 · 08 May 2020

Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
Sergey Levine, Aviral Kumar, George Tucker, Justin Fu
OffRL, GP · 321 · 1,944 · 0 · 04 May 2020