Offline Supervised Learning V.S. Online Direct Policy Optimization: A Comparative Study and A Unified Training Paradigm for Neural Network-Based Optimal Feedback Control

29 November 2022
Yue Zhao, Jiequn Han
OffRL

Papers citing "Offline Supervised Learning V.S. Online Direct Policy Optimization: A Comparative Study and A Unified Training Paradigm for Neural Network-Based Optimal Feedback Control"

3 / 3 papers shown
Title: Learning Free Terminal Time Optimal Closed-loop Control of Manipulators
Authors: Wei Hu, Yue Zhao, E. Weinan, Jiequn Han, Jihao Long
Date: 29 Nov 2023
Metrics: 22 · 0 · 0

Title: Training language models to follow instructions with human feedback
Authors: Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Tags: OSLM, ALM
Date: 04 Mar 2022
Metrics: 319 · 11,953 · 0

Title: Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
Authors: Sergey Levine, Aviral Kumar, George Tucker, Justin Fu
Tags: OffRL, GP
Date: 04 May 2020
Metrics: 340 · 1,960 · 0