arXiv:1908.06214
Computing Linear Restrictions of Neural Networks
Neural Information Processing Systems (NeurIPS), 2019
17 August 2019
Matthew Sotoudeh
Aditya V. Thakur
Papers citing "Computing Linear Restrictions of Neural Networks" (15 papers)
Riemann Sum Optimization for Accurate Integrated Gradients Computation
Swadesh Swain
Shree Singhi
05 Oct 2024
Interpretable Computer Vision Models through Adversarial Training: Unveiling the Robustness-Interpretability Connection
Delyan Boychev
AAML
04 Jul 2023
A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation
Neural Information Processing Systems (NeurIPS), 2023
Thomas Fel
Victor Boutin
Mazda Moayeri
Rémi Cadène
Louis Bethune
Léo Andéol
Mathieu Chalvidal
Thomas Serre
FAtt
11 Jun 2023
Precise and Generalized Robustness Certification for Neural Networks
USENIX Security Symposium (USENIX Security), 2023
Yuanyuan Yuan
Shuai Wang
Z. Su
AAML
11 Jun 2023
Incremental Verification of Neural Networks
Shubham Ugare
Debangshu Banerjee
Sasa Misailovic
Gagandeep Singh
04 Apr 2023
Non-Uniform Interpolation in Integrated Gradients for Low-Latency Explainable-AI
International Symposium on Circuits and Systems (ISCAS), 2023
Ashwin Bhat
A. Raychowdhury
22 Feb 2023
Verifying Attention Robustness of Deep Neural Networks against Semantic Perturbations
Asia-Pacific Software Engineering Conference (APSEC), 2022
S. Munakata
Caterina Urban
Haruki Yokoyama
Koji Yamamoto
Kazuki Munakata
AAML
13 Jul 2022
Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure
Neural Information Processing Systems (NeurIPS), 2022
Paul Novello
Thomas Fel
David Vigouroux
FAtt
13 Jun 2022
Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
Computer Vision and Pattern Recognition (CVPR), 2022
Thomas Fel
Mélanie Ducoffe
David Vigouroux
Rémi Cadène
Mikael Capelle
C. Nicodeme
Thomas Serre
AAML
15 Feb 2022
What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods
Neural Information Processing Systems (NeurIPS), 2021
Julien Colin
Thomas Fel
Rémi Cadène
Thomas Serre
06 Dec 2021
Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis
Thomas Fel
Rémi Cadène
Mathieu Chalvidal
Matthieu Cord
David Vigouroux
Thomas Serre
MLAU
FAtt
AAML
07 Nov 2021
Provable Repair of Deep Neural Networks
ACM-SIGPLAN Symposium on Programming Language Design and Implementation (PLDI), 2021
Matthew Sotoudeh
Aditya V. Thakur
AAML
09 Apr 2021
SyReNN: A Tool for Analyzing Deep Neural Networks
International Journal on Software Tools for Technology Transfer (STTT), 2021
Matthew Sotoudeh
Aditya V. Thakur
AAML
GNN
09 Jan 2021
How Good is your Explanation? Algorithmic Stability Measures to Assess the Quality of Explanations for Deep Neural Networks
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2020
Thomas Fel
David Vigouroux
Rémi Cadène
Thomas Serre
XAI
FAtt
07 Sep 2020
Robustness Certification of Generative Models
M. Mirman
Timon Gehr
Martin Vechev
AAML
30 Apr 2020