Continuous-Time Model-Based Reinforcement Learning

International Conference on Machine Learning (ICML), 2021
9 February 2021 (arXiv:2102.04764)
Çağatay Yıldız, Markus Heinonen, Harri Lähdesmäki
OffRL
Papers citing "Continuous-Time Model-Based Reinforcement Learning"

35 papers shown

AD-NODE: Adaptive Dynamics Learning with Neural ODEs for Mobile Robots Control
Shao-Yi Yu, Jen-Wei Wang, Maya Horii, Vikas Garg, Tarek Zohdi
06 Oct 2025

Bridging Discrete and Continuous RL: Stable Deterministic Policy Gradient with Martingale Characterization
Ziheng Cheng, Xin Guo, Yufei Zhang
OffRL
28 Sep 2025

Learning non-Markovian Dynamical Systems with Signature-based Encoders
Eliott Pradeleix, Rémy Hosseinkhan Boucher, Alena Shilova, Onofrio Semeraro, L. Mathelin
15 Sep 2025

Continuous-Time Value Iteration for Multi-Agent Reinforcement Learning
Xuefeng Wang, Lei Zhang, Henglin Pu, Ahmed H. Qureshi, Husheng Li
11 Sep 2025

Instance-Dependent Continuous-Time Reinforcement Learning via Maximum Likelihood Estimation
Runze Zhao, Yue Yu, Ruhan Wang, Chunfeng Huang, Dongruo Zhou
04 Aug 2025

A Temporal Difference Method for Stochastic Continuous Dynamics
Haruki Settai, Naoya Takeishi, Takehisa Yairi
21 May 2025

Sample and Computationally Efficient Continuous-Time Reinforcement Learning with General Function Approximation
Conference on Uncertainty in Artificial Intelligence (UAI), 2025
Runze Zhao, Yue Yu, Adams Yiyue Zhu, Chen Yang, Dongruo Zhou
20 May 2025

Optimal Control of Probabilistic Dynamics Models via Mean Hamiltonian Minimization
D. Leeftink, Çağatay Yıldız, Steffen Ridderbusch, Max Hinne, Marcel van Gerven
03 Apr 2025

Accuracy of Discretely Sampled Stochastic Policies in Continuous-time Reinforcement Learning
Yanwei Jia, Du Ouyang, Yufei Zhang
13 Mar 2025

Tuning Frequency Bias of State Space Models
International Conference on Learning Representations (ICLR), 2024
Annan Yu, Dongwei Lyu, Soon Hoe Lim, Michael W. Mahoney, N. Benjamin Erichson
02 Oct 2024

Model-based Policy Optimization using Symbolic World Model
Andrey Gorodetskiy, Konstantin Mironov, Aleksandr I. Panov
18 Jul 2024

Physics-Informed Model and Hybrid Planning for Efficient Dyna-Style Reinforcement Learning
Zakariae El Asri, Olivier Sigaud, Nicolas Thome
02 Jul 2024

When to Sense and Control? A Time-adaptive Approach for Continuous-Time RL
Lenart Treven, Bhavya Sukhija, Yarden As, Florian Dorfler, Andreas Krause
03 Jun 2024

Impact of Computation in Integral Reinforcement Learning for Continuous-Time Control
Wenhan Cao, Wei Pan
27 Feb 2024

Data-driven optimal stopping: A pure exploration analysis
Soren Christensen, Niklas Dexheimer, Claudia Strauch
10 Dec 2023

Efficient Exploration in Continuous-time Model-based Reinforcement Learning
Neural Information Processing Systems (NeurIPS), 2023
Lenart Treven, Jonas Hübotter, Bhavya Sukhija, Florian Dorfler, Andreas Krause
30 Oct 2023

Robustifying State-space Models for Long Sequences via Approximate Diagonalization
International Conference on Learning Representations (ICLR), 2023
Annan Yu, Arnur Nigmetov, Dmitriy Morozov, Michael W. Mahoney, N. Benjamin Erichson
02 Oct 2023

ODE-based Recurrent Model-free Reinforcement Learning for POMDPs
Neural Information Processing Systems (NeurIPS), 2023
Xu Zhao, Duzhen Zhang, Liyuan Han, Tielin Zhang, Bo Xu
25 Sep 2023

Continuous-Time Reinforcement Learning: New Design Algorithms with Theoretical Insights and Performance Guarantees
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2023
Brent A. Wallace, J. Si
18 Jul 2023

Learning Energy Conserving Dynamics Efficiently with Hamiltonian Gaussian Processes
M. Ross, Markus Heinonen
03 Mar 2023

Neural Laplace Control for Continuous-time Delayed Systems
International Conference on Artificial Intelligence and Statistics (AISTATS), 2023
Samuel Holt, Alihan Huyuk, Zhaozhi Qian, Hao Sun, M. Schaar
OffRL
24 Feb 2023

Neural Optimal Control using Learned System Dynamics
IEEE International Conference on Robotics and Automation (ICRA), 2023
Selim Engin, Volkan Isler
20 Feb 2023

CERiL: Continuous Event-based Reinforcement Learning
British Machine Vision Conference (BMVC), 2023
Celyn Walters, Simon Hadfield
OffRL
15 Feb 2023

Managing Temporal Resolution in Continuous Value Estimation: A Fundamental Trade-off
Neural Information Processing Systems (NeurIPS), 2022
Zichen Zhang, Johannes Kirschner, Junxi Zhang, Francesco Zanini, Alex Ayoub, Masood Dehghan, Dale Schuurmans
OffRL
17 Dec 2022

Dynamic Decision Frequency with Continuous Options
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022
Amir-Hossein Karimi, Jun Jin, Jun Luo, A. R. Mahmood, Martin Jägersand, Samuele Tosatto
06 Dec 2022

Neural ODEs as Feedback Policies for Nonlinear Optimal Control
IFAC-PapersOnLine, 2022
I. O. Sandoval, Panagiotis Petsagkourakis, Ehecatl Antonio del Rio Chanona
20 Oct 2022

Adaptive Asynchronous Control Using Meta-learned Neural Ordinary Differential Equations
IEEE Transactions on Robotics (TRO), 2022
Achkan Salehi, Steffen Rühl, Stéphane Doncieux
AI4CE
25 Jul 2022

Two-Timescale Stochastic Approximation for Bilevel Optimisation Problems in Continuous-Time Models
Louis Sharrock
14 Jun 2022

Neural Differential Equations for Learning to Program Neural Nets Through Continuous Learning Rules
Neural Information Processing Systems (NeurIPS), 2022
Kazuki Irie, Francesco Faccio, Jürgen Schmidhuber
AI4TS
03 Jun 2022

Learning Interacting Dynamical Systems with Latent Gaussian Process ODEs
Neural Information Processing Systems (NeurIPS), 2022
Çağatay Yıldız, M. Kandemir, Barbara Rakitsch
24 May 2022

Temporal Difference Learning with Continuous Time and State in the Stochastic Setting
Ziad Kobeissi, Francis R. Bach
OffRL
16 Feb 2022

Bellman Meets Hawkes: Model-Based Reinforcement Learning via Temporal Point Processes
AAAI Conference on Artificial Intelligence (AAAI), 2022
Chao Qu, Jue Chen, Siqiao Xue, Xiaoming Shi, James Y. Zhang, Hongyuan Mei
OffRL
29 Jan 2022

Optimisation of Structured Neural Controller Based on Continuous-Time Policy Gradient
Namhoon Cho, Hyo-Sang Shin
17 Jan 2022

Characteristic Neural Ordinary Differential Equations
International Conference on Learning Representations (ICLR), 2021
Xingzi Xu, Ali Hasan, Khalil Elkhalil, Jie Ding, Vahid Tarokh
BDL
25 Nov 2021

Policy Gradient and Actor-Critic Learning in Continuous Time and Space: Theory and Algorithms
Journal of Machine Learning Research (JMLR), 2021
Yanwei Jia, X. Zhou
OffRL
22 Nov 2021