ResearchTrend.AI


On the Forward Invariance of Neural ODEs

International Conference on Machine Learning (ICML), 2022
10 October 2022
Wei Xiao
Tsun-Hsuan Wang
Ramin Hasani
Mathias Lechner
Yutong Ban
Chuang Gan
Daniela Rus
Links: arXiv (abs) · PDF · HTML · GitHub

Papers citing "On the Forward Invariance of Neural ODEs"

8 papers
Certified Robust Invariant Polytope Training in Neural Controlled ODEs
Akash Harapanahalli
Samuel Coogan
02 Aug 2024
ABNet: Attention BarrierNet for Safe and Scalable Robot Learning
Wei Xiao
Tsun-Hsuan Wang
Daniela Rus
18 Jun 2024
KirchhoffNet: A Scalable Ultra Fast Analog Neural Network
International Conference on Computer Aided Design (ICCAD), 2023
Zhengqi Gao
Fan-Keng Sun
Ron Rohrer
Duane S. Boning
24 Oct 2023
Safe Neural Control for Non-Affine Control Systems with Differentiable Control Barrier Functions
IEEE Conference on Decision and Control (CDC), 2023
Wei Xiao
R. Allen
Daniela Rus
06 Sep 2023
SafeDiffuser: Safe Planning with Diffusion Probabilistic Models
International Conference on Learning Representations (ICLR), 2023
Wei Xiao
Tsun-Hsuan Wang
Chuang Gan
Daniela Rus
31 May 2023
FI-ODE: Certifiably Robust Forward Invariance in Neural ODEs
Yujia Huang
I. D. Rodriguez
Huan Zhang
Yuanyuan Shi
Yisong Yue
30 Oct 2022
Interpreting Neural Policies with Disentangled Tree Representations
Tsun-Hsuan Wang
Wei Xiao
Tim Seyde
Ramin Hasani
Daniela Rus
13 Oct 2022
ShieldNN: A Provably Safe NN Filter for Unsafe NN Controllers
James Ferlez
Mahmoud M. Elnaggar
Yasser Shoukry
C. Fleming
16 Jun 2020