What Matters in Learning from Offline Human Demonstrations for Robot Manipulation

6 August 2021
Ajay Mandlekar, Danfei Xu, J. Wong, Soroush Nasiriany, Chen Wang, Rohun Kulkarni, Li Fei-Fei, Silvio Savarese, Yuke Zhu, Roberto Martín-Martín
OffRL

Papers citing "What Matters in Learning from Offline Human Demonstrations for Robot Manipulation"

14 / 14 papers shown
 1. Fast Flow-based Visuomotor Policies via Conditional Optimal Transport Couplings
    Andreas Sochopoulos, Nikolay Malkin, Nikolaos Tsagkas, João Moura, Michael Gienger, S. Vijayakumar
    02 May 2025

 2. J-PARSE: Jacobian-based Projection Algorithm for Resolving Singularities Effectively in Inverse Kinematic Control of Serial Manipulators
    Shivani Guptasarma, Matthew Strong, HongHao Zhen, Monroe Kennedy III
    01 May 2025

 3. RoboGround: Robotic Manipulation with Grounded Vision-Language Priors
    Haifeng Huang, Xinyi Chen, Y. Chen, H. Li, Xiaoshen Han, Z. Wang, Tai Wang, Jiangmiao Pang, Zhou Zhao
    LM&Ro · 30 Apr 2025

 4. PRISM-DP: Spatial Pose-based Observations for Diffusion-Policies via Segmentation, Mesh Generation, and Pose Tracking
    Xiatao Sun, Yinxing Chen, Daniel Rakita
    VGen · 29 Apr 2025

 5. Robot Motion Planning using One-Step Diffusion with Noise-Optimized Approximate Motions
    Tomoharu Aizu, Takeru Oba, Yuki Kondo, Norimichi Ukita
    DiffM · 28 Apr 2025

 6. RoboVerse: Towards a Unified Platform, Dataset and Benchmark for Scalable and Generalizable Robot Learning
    Haoran Geng, Feishi Wang, Songlin Wei, Y. Li, Bangjun Wang, ..., Hao Dong, Siyuan Huang, Yue Wang, Jitendra Malik, Pieter Abbeel
    26 Apr 2025

 7. Offline Learning of Controllable Diverse Behaviors
    Mathieu Petitbois, Rémy Portelas, Sylvain Lamprier, Ludovic Denoyer
    OffRL · 25 Apr 2025

 8. DiffOG: Differentiable Policy Trajectory Optimization with Generalizability
    Zhengtong Xu, Zichen Miao, Qiang Qiu, Zhe Zhang, Yu She
    18 Apr 2025

 9. Is Your Imitation Learning Policy Better than Mine? Policy Comparison with Near-Optimal Stopping
    David Snyder, Asher Hancock, Apurva Badithela, Emma Dixon, Patrick "Tree" Miller, Rares Ambrus, Anirudha Majumdar, Masha Itkina, Haruki Nishimura
    OffRL · 14 Mar 2025

10. Can We Detect Failures Without Failure Data? Uncertainty-Aware Runtime Failure Detection for Imitation Learning Policies
    Chen Xu, Tony Nguyen, Emma Dixon, Christopher Rodriguez, Patrick "Tree" Miller, Robert Lee, Paarth Shah, Rares Ambrus, Haruki Nishimura, Masha Itkina
    OffRL · 11 Mar 2025

11. PLUM: Improving Inference Efficiency By Leveraging Repetition-Sparsity Trade-Off
    Sachit Kuhar, Yash Jain, Alexey Tumanov
    MQ · 04 Dec 2023

12. COMBO: Conservative Offline Model-Based Policy Optimization
    Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, Chelsea Finn
    OffRL · 16 Feb 2021

13. EMaQ: Expected-Max Q-Learning Operator for Simple Yet Effective Offline and Online RL
    Seyed Kamyar Seyed Ghasemipour, Dale Schuurmans, S. Gu
    OffRL · 21 Jul 2020

14. Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
    Sergey Levine, Aviral Kumar, George Tucker, Justin Fu
    OffRL, GP · 04 May 2020