arXiv:2202.05302
Trust in AI: Interpretability is not necessary or sufficient, while black-box interaction is necessary and sufficient
10 February 2022
Max W. Shen
Papers citing "Trust in AI: Interpretability is not necessary or sufficient, while black-box interaction is necessary and sufficient" (17 of 17 papers shown)
Avoiding Leakage Poisoning: Concept Interventions Under Distribution Shifts
M. Zarlenga, Gabriele Dominici, Pietro Barbiero, Z. Shams, M. Jamnik · KELM · 108 / 0 / 0 · 24 Apr 2025

Applications of Generative AI (GAI) for Mobile and Wireless Networking: A Survey
Thai-Hoc Vu, Senthil Kumar Jagatheesaperumal, Minh-Duong Nguyen, Nguyen Van Huynh, Sunghwan Kim, Viet Quoc Pham · 29 / 8 / 0 · 30 May 2024

Understanding Inter-Concept Relationships in Concept-Based Models
Naveen Raman, M. Zarlenga, M. Jamnik · 22 / 4 / 0 · 28 May 2024

On the Relationship Between Interpretability and Explainability in Machine Learning
Benjamin Leblanc, Pascal Germain · FaML · 24 / 0 / 0 · 20 Nov 2023

A Framework for Interpretability in Machine Learning for Medical Imaging
Alan Q. Wang, Batuhan K. Karaman, Heejong Kim, Jacob Rosenthal, Rachit Saluja, Sean I. Young, M. Sabuncu · AI4CE · 11 / 10 / 0 · 02 Oct 2023

SHARCS: Shared Concept Space for Explainable Multimodal Learning
Gabriele Dominici, Pietro Barbiero, Lucie Charlotte Magister, Pietro Liò, Nikola Simidjievski · 15 / 4 / 0 · 01 Jul 2023

Interpretable Neural-Symbolic Concept Reasoning
Pietro Barbiero, Gabriele Ciravegna, Francesco Giannini, M. Zarlenga, Lucie Charlotte Magister, Alberto Tonda, Pietro Liò, F. Precioso, M. Jamnik, G. Marra · NAI, LRM · 56 / 38 / 0 · 27 Apr 2023

Combining Stochastic Explainers and Subgraph Neural Networks can Increase Expressivity and Interpretability
Indro Spinelli, Michele Guerra, F. Bianchi, Simone Scardapane · 25 / 0 / 0 · 14 Apr 2023

A.I. Robustness: a Human-Centered Perspective on Technological Challenges and Opportunities
Andrea Tocchetti, Lorenzo Corti, Agathe Balayn, Mireia Yurrita, Philip Lippmann, Marco Brambilla, Jie-jin Yang · 19 / 10 / 0 · 17 Oct 2022

Requirements Engineering for Machine Learning: A Review and Reflection
Zhong Pei, Lin Liu, Chen Wang, Jianmin Wang · VLM · 26 / 22 / 0 · 03 Oct 2022

Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off
M. Zarlenga, Pietro Barbiero, Gabriele Ciravegna, G. Marra, Francesco Giannini, ..., F. Precioso, S. Melacci, Adrian Weller, Pietro Liò, M. Jamnik · 71 / 52 / 0 · 19 Sep 2022

Encoding Concepts in Graph Neural Networks
Lucie Charlotte Magister, Pietro Barbiero, Dmitry Kazhdan, F. Siciliano, Gabriele Ciravegna, Fabrizio Silvestri, M. Jamnik, Pietro Liò · 14 / 21 / 0 · 27 Jul 2022

Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges
M. Bronstein, Joan Bruna, Taco S. Cohen, Petar Veličković · GNN · 172 / 1,100 / 0 · 27 Apr 2021

Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI
Alon Jacovi, Ana Marasović, Tim Miller, Yoav Goldberg · 244 / 422 / 0 · 15 Oct 2020

Integrating Scientific Knowledge with Machine Learning for Engineering and Environmental Systems
J. Willard, X. Jia, Shaoming Xu, M. Steinbach, Vipin Kumar · AI4CE · 83 / 387 / 0 · 10 Mar 2020

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine · OOD · 243 / 11,659 / 0 · 09 Mar 2017

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim · XAI, FaML · 225 / 3,672 / 0 · 28 Feb 2017