Captum: A unified and generic model interpretability library for PyTorch

arXiv:2009.07896 · 16 September 2020
Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, B. Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, Orion Reblitz-Richardson
FAtt
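Captum exposes its attribution algorithms as classes that wrap an arbitrary PyTorch model. A minimal sketch of the typical wrap-then-attribute usage follows; the toy network, input shape, and target class are hypothetical placeholders, not taken from the paper:

import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Hypothetical stand-in network; any torch.nn.Module with a forward pass works.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
model.eval()

# A single example with 10 input features (placeholder shape).
inputs = torch.rand(1, 10)

# Wrap the model in an attribution algorithm and attribute the score of
# class index 1 back to the input features.
ig = IntegratedGradients(model)
attributions = ig.attribute(inputs, target=1)
print(attributions.shape)  # same shape as inputs: one score per input feature

Other algorithms in captum.attr (for example Saliency, DeepLift, GradientShap) follow the same pattern of wrapping the model and calling attribute on inputs.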

Papers citing "Captum: A unified and generic model interpretability library for PyTorch"

15 / 365 papers shown

Improving Attribution Methods by Learning Submodular Functions
Piyushi Manupriya, Tarun Ram Menta, S. Jagarlapudi, V. Balasubramanian
TDI · 22 · 6 · 0 · 19 Apr 2021

Human-Imitating Metrics for Training and Evaluating Privacy Preserving Emotion Recognition Models Using Sociolinguistic Knowledge
Mimansa Jaiswal, E. Provost
23 · 0 · 0 · 18 Apr 2021

Evaluating explainable artificial intelligence methods for multi-label deep learning classification tasks in remote sensing
Ioannis Kakogeorgiou, Konstantinos Karantzalos
XAI · 23 · 118 · 0 · 03 Apr 2021

SYSML: StYlometry with Structure and Multitask Learning: Implications for Darknet Forum Migrant Analysis
Pranav Maneriker, Yuntian He, Srinivas Parthasarathy
26 · 9 · 0 · 01 Apr 2021

Efficient Explanations from Empirical Explainers
Robert Schwarzenberg, Nils Feldhus, Sebastian Möller
FAtt · 27 · 9 · 0 · 29 Mar 2021

Robust Models Are More Interpretable Because Attributions Look Normal
Zifan Wang, Matt Fredrikson, Anupam Datta
OOD, FAtt · 28 · 25 · 0 · 20 Mar 2021

LRG at SemEval-2021 Task 4: Improving Reading Comprehension with Abstract Words using Augmentation, Linguistic Features and Voting
Abheesht Sharma, Harshit Pandey, Gunjan Chhablani, Yash Bhartia, T. Dash
6 · 1 · 0 · 24 Feb 2021

NLRG at SemEval-2021 Task 5: Toxic Spans Detection Leveraging BERT-based Token Classification and Span Prediction Techniques
Gunjan Chhablani, Abheesht Sharma, Harshit Pandey, Yash Bhartia, S. Suthaharan
6 · 14 · 0 · 24 Feb 2021

Gifsplanation via Latent Shift: A Simple Autoencoder Approach to Counterfactual Generation for Chest X-rays
Joseph Paul Cohen, Rupert Brooks, Sovann En, Evan Zucker, Anuj Pareek, M. Lungren, Akshay S. Chaudhari
FAtt, MedIm · 29 · 3 · 0 · 18 Feb 2021

Attribution Mask: Filtering Out Irrelevant Features By Recursively Focusing Attention on Inputs of DNNs
Jaehwan Lee, Joon-Hyuk Chang
TDI, FAtt · 17 · 0 · 0 · 15 Feb 2021

Detecting Trojaned DNNs Using Counterfactual Attributions
Karan Sikka, Indranil Sur, Susmit Jha, Anirban Roy, Ajay Divakaran
AAML · 9 · 12 · 0 · 03 Dec 2020

diagNNose: A Library for Neural Activation Analysis
Jaap Jumelet
AI4CE · 9 · 9 · 0 · 13 Nov 2020

Bangla Text Classification using Transformers
Tanvirul Alam, A. Khan, Firoj Alam
15 · 34 · 0 · 09 Nov 2020

Interpretation of NLP models through input marginalization
Siwon Kim, Jihun Yi, Eunji Kim, Sungroh Yoon
MILM, FAtt · 14 · 58 · 0 · 27 Oct 2020

Exploration of Interpretability Techniques for Deep COVID-19 Classification using Chest X-ray Images
S. Chatterjee, Fatima Saad, Chompunuch Sarasaen, Suhita Ghosh, Valerie Krug, ..., P. Radeva, G. Rose, Sebastian Stober, Oliver Speck, A. Nürnberger
19 · 25 · 0 · 03 Jun 2020