Captum: A unified and generic model interpretability library for PyTorch
Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, B. Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, Orion Reblitz-Richardson
arXiv 2009.07896, 16 September 2020. [FAtt]
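For readers unfamiliar with the cited library: Captum wraps gradient- and perturbation-based attribution algorithms behind a common attribute() interface. Below is a minimal, illustrative sketch using Integrated Gradients; the two-layer model, input shapes, and target class are placeholder assumptions, and only the captum.attr calls reflect the library's public API.

    # Illustrative sketch: attributing a toy classifier's prediction with
    # Captum's Integrated Gradients. Model, shapes, and target class are
    # placeholders; the captum.attr calls mirror the library's interface.
    import torch
    import torch.nn as nn
    from captum.attr import IntegratedGradients

    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
    model.eval()

    inputs = torch.randn(4, 8)            # batch of 4 examples, 8 features
    ig = IntegratedGradients(model)

    # Per-feature attributions for class index 1, plus the convergence delta
    # that quantifies the approximation error of the path integral.
    attributions, delta = ig.attribute(inputs, target=1,
                                       return_convergence_delta=True)
    print(attributions.shape)             # torch.Size([4, 8])
    print(delta)

Other attribution classes in captum.attr (e.g., Saliency, DeepLift, LayerGradCam) follow the same constructor-then-attribute pattern.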
Papers citing "Captum: A unified and generic model interpretability library for PyTorch" (50 of 365 shown)
Improving Health Mentioning Classification of Tweets using Contrastive Adversarial Training
Pervaiz Iqbal Khan, Shoaib Ahmed Siddiqui, Imran Razzak, Andreas Dengel, Sheraz Ahmed. 03 Mar 2022.
BioADAPT-MRC: Adversarial Learning-based Domain Adaptation Improves Biomedical Machine Reading Comprehension Task
Maria Mahbub, S. Srinivasan, Edmon Begoli, Gregory D. Peterson. 26 Feb 2022.
Continuous Human Action Recognition for Human-Machine Interaction: A Review
Harshala Gammulle, David Ahmedt-Aristizabal, Simon Denman, Lachlan Tychsen-Smith, L. Petersson, Clinton Fookes. 26 Feb 2022.
Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond
Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel G. Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne. 14 Feb 2022. [XAI, ELM]
InterpretTime: a new approach for the systematic evaluation of neural-network interpretability in time series classification
Hugues Turbé, Mina Bjelogrlic, Christian Lovis, G. Mengaldo. 11 Feb 2022. [AI4TS]
A Graph Based Neural Network Approach to Immune Profiling of Multiplexed Tissue Samples
Natalia Garcia Martin, S. Malacrino, M. Wojciechowska, L. Campo, Helen Jones, ..., Chris Holmes, K. Sirinukunwattana, H. Sailem, C. Verrill, J. Rittscher. 01 Feb 2022.
Visualizing Automatic Speech Recognition -- Means for a Better Understanding?
Karla Markert, Romain Parracone, Mykhailo Kulakov, Philip Sperl, Ching-yu Kao, Konstantin Böttinger. 01 Feb 2022.
Generalizability of Machine Learning Models: Quantitative Evaluation of Three Methodological Pitfalls
Farhad Maleki, K. Ovens, Rajiv Gupta, C. Reinhold, A. Spatz, Reza Forghani. 01 Feb 2022.
LAP: An Attention-Based Module for Concept Based Self-Interpretation and Knowledge Injection in Convolutional Neural Networks
Rassa Ghavami Modegh, Ahmadali Salimi, Alireza Dizaji, Hamid R. Rabiee. 27 Jan 2022. [FAtt]
Diagnosing AI Explanation Methods with Folk Concepts of Behavior
Alon Jacovi, Jasmijn Bastings, Sebastian Gehrmann, Yoav Goldberg, Katja Filippova. 27 Jan 2022.
More Than Words: Towards Better Quality Interpretations of Text Classifiers
Muhammad Bilal Zafar, Philipp Schmidt, Michele Donini, Cédric Archambeau, F. Biessmann, Sanjiv Ranjan Das, K. Kenthapadi. 23 Dec 2021. [FAtt]
Interpretable and Interactive Deep Multiple Instance Learning for Dental Caries Classification in Bitewing X-rays
Benjamin Bergner, Csaba Rohrer, Aiham Taleb, Martha Duchrau, Guilherme De Leon, J. A. Rodrigues, F. Schwendicke, J. Krois, C. Lippert. 17 Dec 2021.
UNIREX: A Unified Learning Framework for Language Model Rationale Extraction
Aaron Chan, Maziar Sanjabi, Lambert Mathias, L. Tan, Shaoliang Nie, Xiaochang Peng, Xiang Ren, Hamed Firooz. 16 Dec 2021.
Visualising and Explaining Deep Learning Models for Speech Quality Prediction
H. Tilkorn, Gabriel Mittag (Usability Lab TU Berlin). 12 Dec 2021.
Rethinking the Authorship Verification Experimental Setups
Florin Brad, Andrei Manolache, Elena Burceanu, Antonio Bărbălău, Radu Tudor Ionescu, Marius Popescu. 09 Dec 2021.
Predicting the Travel Distance of Patients to Access Healthcare using Deep Neural Networks
Lichin Chen, J. Sheu, Yuh-Jue Chuang, Yu Tsao. 07 Dec 2021.
Improving Deep Learning Interpretability by Saliency Guided Training
Aya Abdelsalam Ismail, H. C. Bravo, S. Feizi. 29 Nov 2021. [FAtt]
Picasso: Model-free Feature Visualization
Binh Vu, Igor L. Markov. 24 Nov 2021. [FAtt, VLM]
Inferring halo masses with Graph Neural Networks
Pablo Villanueva-Domingo, F. Villaescusa-Navarro, D. Anglés-Alcázar, S. Genel, F. Marinacci, D. Spergel, L. Hernquist, M. Vogelsberger, R. Davé, D. Narayanan. 16 Nov 2021. [AI4CE]
A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines
V. Borisov, Johannes Meier, J. V. D. Heuvel, Hamed Jalali, Gjergji Kasneci. 14 Nov 2021. [FAtt]
A Practical guide on Explainable AI Techniques applied on Biomedical use case applications
Adrien Bennetot, Ivan Donadello, Ayoub El Qadi, M. Dragoni, Thomas Frossard, ..., M. Trocan, Raja Chatila, Andreas Holzinger, Artur Garcez, Natalia Díaz Rodríguez. 13 Nov 2021. [XAI]
Defining and Quantifying the Emergence of Sparse Concepts in DNNs
J. Ren, Mingjie Li, Qirui Chen, Huiqi Deng, Quanshi Zhang. 11 Nov 2021.
On the Effects of Artificial Data Modification
Antonia Marcu, Adam Prugel-Bennett. 26 Oct 2021.
Revealing unforeseen diagnostic image features with deep learning by detecting cardiovascular diseases from apical four-chamber ultrasounds
Li-Hsin Cheng, Pablo Bosch, Rutger F. H. Hofman, Timo B. Brakenhoff, E. F. Bruggemans, R. Geest, E. Holman. 25 Oct 2021.
Multi-concept adversarial attacks
Vibha Belavadi, Yan Zhou, Murat Kantarcioglu, B. Thuraisingham. 19 Oct 2021. [AAML]
Making Corgis Important for Honeycomb Classification: Adversarial Attacks on Concept-based Explainability Tools
Davis Brown, Henry Kvinge. 14 Oct 2021. [AAML]
AutoNLU: Detecting, root-causing, and fixing NLU model errors
P. Sethi, Denis Savenkov, Forough Arabshahi, Jack Goetz, Micaela Tolliver, Nicolas Scheffer, I. Kabul, Yue Liu, Ahmed Aly. 12 Oct 2021.
Focus! Rating XAI Methods and Finding Biases
Anna Arias-Duart, Ferran Parés, Dario Garcia-Gasulla, Víctor Giménez-Ábalos. 28 Sep 2021.
MIIDL: a Python package for microbial biomarkers identification powered by interpretable deep learning
Jian Jiang. 24 Sep 2021.
Explainability Requires Interactivity
Matthias Kirchler, M. Graf, Marius Kloft, C. Lippert. 16 Sep 2021. [FAtt, AAML, HAI]
Does BERT Learn as Humans Perceive? Understanding Linguistic Styles through Lexica
Shirley Anugrah Hayati, Dongyeop Kang, Lyle Ungar. 06 Sep 2021.
A Generative Approach for Mitigating Structural Biases in Natural Language Inference
Dimion Asael, Zachary M. Ziegler, Yonatan Belinkov. 31 Aug 2021.
Thermostat: A Large Collection of NLP Model Explanations and Analysis Tools
Nils Feldhus, Robert Schwarzenberg, Sebastian Möller. 31 Aug 2021.
Explaining Classes through Word Attribution
Samuel Rönnqvist, A. Myntti, Aki-Juhani Kyröläinen, S. Pyysalo, Veronika Laippala, Filip Ginter. 31 Aug 2021. [FAtt]
Speaker-Conditioned Hierarchical Modeling for Automated Speech Scoring
Yaman Kumar Singla, Avykat Gupta, Shaurya Bagga, Changyou Chen, Balaji Krishnamurthy, R. Shah. 30 Aug 2021.
Multilingual Multi-Aspect Explainability Analyses on Machine Reading Comprehension Models
Yiming Cui, Weinan Zhang, Wanxiang Che, Ting Liu, Zhigang Chen, Shijin Wang. 26 Aug 2021. [LRM]
Challenges for cognitive decoding using deep learning methods
A. Thomas, Christopher Ré, R. Poldrack. 16 Aug 2021. [AI4CE]
Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing
Sanchit Sinha, Hanjie Chen, Arshdeep Sekhon, Yangfeng Ji, Yanjun Qi. 11 Aug 2021. [AAML, FAtt]
Pan-Cancer Integrative Histology-Genomic Analysis via Interpretable Multimodal Deep Learning
Richard J. Chen, Ming Y. Lu, Drew F. K. Williamson, Tiffany Y. Chen, Jana Lipkova, ..., Maha Shady, Mane Williams, Bumjin Joo, Zahra Noor, Faisal Mahmood. 04 Aug 2021.
Temporal Dependencies in Feature Importance for Time Series Predictions
Kin Kwan Leung, Clayton Rooke, Jonathan Smith, S. Zuberi, M. Volkovs. 29 Jul 2021. [OOD, AI4TS]
Shared Interest: Measuring Human-AI Alignment to Identify Recurring Patterns in Model Behavior
Angie Boggust, Benjamin Hoover, Arvindmani Satyanarayan, Hendrik Strobelt. 20 Jul 2021.
Quantifying Explainability in NLP and Analyzing Algorithms for Performance-Explainability Tradeoff
Michael J. Naylor, C. French, Samantha R. Terker, Uday Kamath. 12 Jul 2021.
A Review of Bangla Natural Language Processing Tasks and the Utility of Transformer Models
Firoj Alam, Md. Arid Hasan, Tanvirul Alam, A. Khan, Janntatul Tajrin, Naira Khan, Shammur A. Chowdhury. 08 Jul 2021. [LM&MA]
Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy
Christopher J. Anders, David Neumann, Wojciech Samek, K. Müller, Sebastian Lapuschkin. 24 Jun 2021.
Using Integrated Gradients and Constituency Parse Trees to explain Linguistic Acceptability learnt by BERT
Anmol Nayak, Hariprasad Timmapathini. 01 Jun 2021.
Memory Wrap: a Data-Efficient and Interpretable Extension to Image Classification Models
B. La Rosa, Roberto Capobianco, Daniele Nardi. 01 Jun 2021. [VLM]
Fine-grained Interpretation and Causation Analysis in Deep NLP Models
Hassan Sajjad, Narine Kokhlikyan, Fahim Dalvi, Nadir Durrani. 17 May 2021. [MILM]
DEEMD: Drug Efficacy Estimation against SARS-CoV-2 based on cell Morphology with Deep multiple instance learning
M. Saberian, Kathleen P. Moriarty, A. Olmstead, Christian Hallgrimson, François Jean, I. Nabi, Maxwell W. Libbrecht, Ghassan Hamarneh. 10 May 2021.
Towards Benchmarking the Utility of Explanations for Model Debugging
Maximilian Idahl, Lijun Lyu, U. Gadiraju, Avishek Anand. 10 May 2021. [XAI]
Do Concept Bottleneck Models Learn as Intended?
Andrei Margeloiu, Matthew Ashman, Umang Bhatt, Yanzhi Chen, M. Jamnik, Adrian Weller. 10 May 2021. [SLR]