ResearchTrend.AI

Captum: A unified and generic model interpretability library for PyTorch

arXiv 2009.07896 · 16 September 2020
Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, B. Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, Orion Reblitz-Richardson
    FAtt
ArXiv (abs) · PDF · HTML
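Among the attribution methods Captum implements is Integrated Gradients, which several of the citing papers below build on. As a rough, self-contained sketch of the underlying idea (a midpoint Riemann-sum approximation of the path integral; the toy quadratic model and helper names here are illustrative only, not Captum's actual API):

```python
# Minimal numerical sketch of Integrated Gradients, the kind of
# feature-attribution method Captum provides for PyTorch models.
# The toy model and function names below are illustrative only.

def integrated_gradients(grad_f, x, baseline, steps=100):
    """Approximate IG_i = (x_i - b_i) * integral_0^1 of df/dx_i
    evaluated at b + alpha*(x - b), via a midpoint Riemann sum."""
    attr = [0.0] * len(x)
    for k in range(steps):
        alpha = (k + 0.5) / steps
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_f(point)
        for i in range(len(x)):
            attr[i] += (x[i] - baseline[i]) * g[i] / steps
    return attr

# Toy "model": f(x) = sum(x_i^2), with gradient df/dx_i = 2*x_i.
f = lambda x: sum(v * v for v in x)
grad_f = lambda x: [2.0 * v for v in x]

x, baseline = [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]
attr = integrated_gradients(grad_f, x, baseline)
print([round(a, 4) for a in attr])  # ≈ [1.0, 4.0, 9.0]
# Completeness axiom: attributions sum to f(x) - f(baseline).
print(round(sum(attr), 4))
```

For the quadratic toy model the attributions recover x_i² exactly, and their sum equals f(x) − f(baseline), the completeness property that Integrated Gradients satisfies by construction.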

Papers citing "Captum: A unified and generic model interpretability library for PyTorch"

24 / 424 papers shown
Quantifying Explainability in NLP and Analyzing Algorithms for Performance-Explainability Tradeoff
Michael J. Naylor, C. French, Samantha R. Terker, Uday Kamath
150 · 10 · 0 · 12 Jul 2021

A Review of Bangla Natural Language Processing Tasks and the Utility of Transformer Models
Firoj Alam, Md. Arid Hasan, Tanvirul Alam, A. Khan, Janntatul Tajrin, Naira Khan, Shammur A. Chowdhury
LM&MA · 156 · 32 · 0 · 08 Jul 2021

Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy
Christopher J. Anders, David Neumann, Wojciech Samek, K. Müller, Sebastian Lapuschkin
212 · 83 · 0 · 24 Jun 2021

Using Integrated Gradients and Constituency Parse Trees to explain Linguistic Acceptability learnt by BERT
ICON, 2021
Anmol Nayak, Hariprasad Timmapathini
169 · 5 · 0 · 01 Jun 2021

Memory Wrap: a Data-Efficient and Interpretable Extension to Image Classification Models
B. La Rosa, Roberto Capobianco, Daniele Nardi
VLM · 134 · 10 · 0 · 01 Jun 2021

Fine-grained Interpretation and Causation Analysis in Deep NLP Models
North American Chapter of the Association for Computational Linguistics (NAACL), 2021
Hassan Sajjad, Narine Kokhlikyan, Fahim Dalvi, Nadir Durrani
MILM · 277 · 8 · 0 · 17 May 2021

DEEMD: Drug Efficacy Estimation against SARS-CoV-2 based on Cell Morphology with Deep Multiple Instance Learning
IEEE Transactions on Medical Imaging (IEEE TMI), 2021
M. Saberian, Kathleen P. Moriarty, A. Olmstead, Christian Hallgrimson, François Jean, I. Nabi, Maxwell W. Libbrecht, Ghassan Hamarneh
211 · 14 · 0 · 10 May 2021

Towards Benchmarking the Utility of Explanations for Model Debugging
Maximilian Idahl, Lijun Lyu, U. Gadiraju, Avishek Anand
XAI · 139 · 19 · 0 · 10 May 2021

Do Concept Bottleneck Models Learn as Intended?
Andrei Margeloiu, Matthew Ashman, Umang Bhatt, Yanzhi Chen, M. Jamnik, Adrian Weller
SLR · 177 · 109 · 0 · 10 May 2021

Improving Attribution Methods by Learning Submodular Functions
International Conference on Artificial Intelligence and Statistics (AISTATS), 2021
Piyushi Manupriya, Tarun Ram Menta, S. Jagarlapudi, V. Balasubramanian
TDI · 265 · 6 · 0 · 19 Apr 2021

Human-Imitating Metrics for Training and Evaluating Privacy Preserving Emotion Recognition Models Using Sociolinguistic Knowledge
Mimansa Jaiswal, E. Provost
155 · 0 · 0 · 18 Apr 2021

Evaluating explainable artificial intelligence methods for multi-label deep learning classification tasks in remote sensing
International Journal of Applied Earth Observation and Geoinformation (JAEOG), 2021
Ioannis Kakogeorgiou, Konstantinos Karantzalos
XAI · 168 · 140 · 0 · 03 Apr 2021

SYSML: StYlometry with Structure and Multitask Learning: Implications for Darknet Forum Migrant Analysis
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
Pranav Maneriker, Yuntian He, Srinivas Parthasarathy
123 · 12 · 0 · 01 Apr 2021

Efficient Explanations from Empirical Explainers
BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2021
Robert Schwarzenberg, Nils Feldhus, Sebastian Möller
FAtt · 252 · 9 · 0 · 29 Mar 2021

Robust Models Are More Interpretable Because Attributions Look Normal
International Conference on Machine Learning (ICML), 2021
Zifan Wang, Matt Fredrikson, Anupam Datta
OOD · FAtt · 278 · 31 · 0 · 20 Mar 2021

LRG at SemEval-2021 Task 4: Improving Reading Comprehension with Abstract Words using Augmentation, Linguistic Features and Voting
International Workshop on Semantic Evaluation (SemEval), 2021
Abheesht Sharma, Harshit Pandey, Gunjan Chhablani, Yash Bhartia, T. Dash
94 · 1 · 0 · 24 Feb 2021

NLRG at SemEval-2021 Task 5: Toxic Spans Detection Leveraging BERT-based Token Classification and Span Prediction Techniques
International Workshop on Semantic Evaluation (SemEval), 2021
Gunjan Chhablani, Abheesht Sharma, Harshit Pandey, Yash Bhartia, S. Suthaharan
91 · 14 · 0 · 24 Feb 2021

Gifsplanation via Latent Shift: A Simple Autoencoder Approach to Counterfactual Generation for Chest X-rays
Joseph Paul Cohen, Rupert Brooks, Sovann En, Evan Zucker, Anuj Pareek, M. Lungren, Akshay S. Chaudhari
FAtt · MedIm · 180 · 4 · 0 · 18 Feb 2021

Attribution Mask: Filtering Out Irrelevant Features By Recursively Focusing Attention on Inputs of DNNs
Jaehwan Lee, Joon-Hyuk Chang
TDI · FAtt · 174 · 0 · 0 · 15 Feb 2021

Detecting Trojaned DNNs Using Counterfactual Attributions
International Conference on Applied Algorithms (ICAA), 2020
Karan Sikka, Indranil Sur, Susmit Jha, Anirban Roy, Ajay Divakaran
AAML · 154 · 13 · 0 · 03 Dec 2020

diagNNose: A Library for Neural Activation Analysis
BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2020
Jaap Jumelet
AI4CE · 116 · 9 · 0 · 13 Nov 2020

Bangla Text Classification using Transformers
Tanvirul Alam, A. Khan, Firoj Alam
140 · 43 · 0 · 09 Nov 2020

Interpretation of NLP models through input marginalization
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020
Siwon Kim, Jihun Yi, Eunji Kim, Sungroh Yoon
MILM · FAtt · 195 · 63 · 0 · 27 Oct 2020

Exploration of Interpretability Techniques for Deep COVID-19 Classification using Chest X-ray Images
Journal of Imaging (JI), 2020
S. Chatterjee, Fatima Saad, Chompunuch Sarasaen, Suhita Ghosh, Valerie Krug, ..., Petia Radeva, G. Rose, Sebastian Stober, Oliver Speck, A. Nürnberger
256 · 28 · 0 · 03 Jun 2020