Probabilistic Guarantees for Safe Deep Reinforcement Learning
E. Bacci, David Parker
14 May 2020

Papers citing "Probabilistic Guarantees for Safe Deep Reinforcement Learning" (4 papers shown)

Taming Reachability Analysis of DNN-Controlled Systems via Abstraction-Based Training
Jiaxu Tian, Dapeng Zhi, Si Liu, Peixin Wang, Guy Katz, M. Zhang
21 Nov 2022

How to Certify Machine Learning Based Safety-critical Systems? A Systematic Literature Review
Florian Tambon, Gabriel Laberge, Le An, Amin Nikanjam, Paulina Stevia Nouwou Mindom, Y. Pequignot, Foutse Khomh, G. Antoniol, E. Merlo, François Laviolette
26 Jul 2021

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
Guy Katz, Clark W. Barrett, D. Dill, Kyle D. Julian, Mykel Kochenderfer
03 Feb 2017

Safety Verification of Deep Neural Networks
Xiaowei Huang, M. Kwiatkowska, Sen Wang, Min Wu
21 Oct 2016