ART: Abstraction Refinement-Guided Training for Provably Correct Neural Networks
arXiv:1907.10662, 17 July 2019
Xuankang Lin, He Zhu, R. Samanta, Suresh Jagannathan
AAML
Papers citing "ART: Abstraction Refinement-Guided Training for Provably Correct Neural Networks" (6 of 6 papers shown)

Provably-Safe Neural Network Training Using Hybrid Zonotope Reachability Analysis
Long Kiu Chung, Shreyas Kousik
22 Jan 2025

DeepSaDe: Learning Neural Networks that Guarantee Domain Constraint Satisfaction
Kshitij Goyal, Sebastijan Dumancic, Hendrik Blockeel
02 Mar 2023

Failing with Grace: Learning Neural Network Controllers that are Boundedly Unsafe
Panagiotis Vlantis, Leila J. Bridgeman, Michael M. Zavlanos
22 Jun 2021

Scalable Synthesis of Verified Controllers in Deep Reinforcement Learning
Zikang Xiong, Suresh Jagannathan
20 Apr 2021

Robustness Verification for Transformers
Zhouxing Shi, Huan Zhang, Kai-Wei Chang, Minlie Huang, Cho-Jui Hsieh
AAML
16 Feb 2020

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
Guy Katz, Clark W. Barrett, D. Dill, Kyle D. Julian, Mykel Kochenderfer
AAML
03 Feb 2017