
Captum: A unified and generic model interpretability library for PyTorch
arXiv: 2009.07896
16 September 2020
Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, B. Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, Orion Reblitz-Richardson
    FAtt

Papers citing "Captum: A unified and generic model interpretability library for PyTorch"

Showing 50 of 425 citing papers.
Delta-XAI: A Unified Framework for Explaining Prediction Changes in Online Time Series Monitoring
Changhun Kim, Yechan Mun, Hyeongwon Jang, Eunseo Lee, Sangchul Hahn, Eunho Yang
AI4TS
28 Nov 2025
The Directed Prediction Change - Efficient and Trustworthy Fidelity Assessment for Local Feature Attribution Methods
Kevin Iselborn, David Dembinsky, Adriano Lucieri, Andreas Dengel
AAML, FAtt
26 Nov 2025
A Research and Development Portfolio of GNN Centric Malware Detection, Explainability, and Dataset Curation
Hossein Shokouhinejad, Griffin Higgins, Roozbeh Razavi-Far, Ali Ghorbani
25 Nov 2025
Generation, Evaluation, and Explanation of Novelists' Styles with Single-Token Prompts
Mosab Rezaei, Mina Rajaei Moghadam, A. Shaikh, Hamed Alhoori, Reva Freedman
25 Nov 2025
FinTRec: Transformer Based Unified Contextual Ads Targeting and Personalization for Financial Applications
Dwipam Katariya, Snehita Varma, Akshat Shreemali, Benjamin Wu, Kalanand Mishra, Pranab Mohanty
18 Nov 2025
From Confusion to Clarity: ProtoScore - A Framework for Evaluating Prototype-Based XAI
Conference on Fairness, Accountability and Transparency (FAccT), 2025
Helena Monke, Benjamin Sae-Chew, Benjamin Frész, Marco F. Huber
11 Nov 2025
Beyond Neural Incompatibility: Easing Cross-Scale Knowledge Transfer in Large Language Models through Latent Semantic Alignment
Jian Gu, A. Aleti, Chunyang Chen, Hongyu Zhang
28 Oct 2025
C-SWAP: Explainability-Aware Structured Pruning for Efficient Neural Networks Compression
Baptiste Bauvin, Loïc Baret, Ola Ahmad
21 Oct 2025
Fine-grained Analysis of Brain-LLM Alignment through Input Attribution
Michela Proietti, Roberto Capobianco, Mariya Toneva
14 Oct 2025
Does LLM Focus on the Right Words? Mitigating Context Bias in LLM-based Recommenders
Bohao Wang, Jiawei Chen, Feng Liu, Changwang Zhang, Jun Wang, Canghong Jin, Chun-Yen Chen, Can Wang
13 Oct 2025
Beyond Over-Refusal: Scenario-Based Diagnostics and Post-Hoc Mitigation for Exaggerated Refusals in LLMs
Shuzhou Yuan, Ercong Nie, Yinuo Sun, Chenxuan Zhao, William LaCroix, Michael Färber
09 Oct 2025
HEMERA: A Human-Explainable Transformer Model for Estimating Lung Cancer Risk using GWAS Data
Maria Mahbub, Robert J. Klein, Myvizhi Esai Selvan, Rowena Yip, Claudia Henschke, ..., Eileen McAllister, Samuel M. Aguayo, Zeynep H. Gümüş, Ioana Danciu, VA Million Veteran Program
MedIm
08 Oct 2025
Cluster Paths: Navigating Interpretability in Neural Networks
Nicholas M. Kroeger, Vincent Bindschaedler
08 Oct 2025
Chronological Thinking in Full-Duplex Spoken Dialogue Language Models
Donghang Wu, H. Zhang, Chen Chen, Tianyu Zhang, Fei Tian, ..., Gang Yu, Hexin Liu, Nana Hou, Yuchen Hu, Eng Siong Chng
AuLLM, KELM, AI4CE, LRM
02 Oct 2025
DeepProv: Behavioral Characterization and Repair of Neural Networks via Inference Provenance Graph Analysis
Firas Ben Hmida, Abderrahmen Amich, Ata Kaboudi, Birhanu Eshete
AAML, GNN
30 Sep 2025
TDHook: A Lightweight Framework for Interpretability
Yoann Poupart
AI4CE
29 Sep 2025
EVO-LRP: Evolutionary Optimization of LRP for Interpretable Model Explanations
Emerald Zhang, Julian Weaver, Samantha R Santacruz, Edward Castillo
28 Sep 2025
On The Variability of Concept Activation Vectors
Julia Wenkmann, Damien Garreau
AAML
28 Sep 2025
Dynamical Modeling of Behaviorally Relevant Spatiotemporal Patterns in Neural Imaging Data
Mohammad Hosseini, Maryam M. Shanechi
AI4CE
23 Sep 2025
Deep Learning-Driven Peptide Classification in Biological Nanopores
S. Tovey, Julian Hoßbach, Sandro Kuppel, Tobias Ensslen, Jan C. Behrends, Christian Holm
17 Sep 2025
Amulet: a Python Library for Assessing Interactions Among ML Defenses and Risks
Asim Waheed, Vasisht Duddu, Rui Zhang, S. Szyller
AAML
15 Sep 2025
Evaluation of Black-Box XAI Approaches for Predictors of Values of Boolean Formulae
Stav Armoni-Friedmann, Hana Chockler, David A. Kelly
12 Sep 2025
Functional Groups are All you Need for Chemically Interpretable Molecular Property Prediction
Roshan Balaji, Joe Bobby, Nirav Pravinbhai Bhatt
AI4CE
11 Sep 2025
Assessment of deep learning models integrated with weather and environmental variables for wildfire spread prediction and a case study of the 2023 Maui fires
Jiyeon Kim, Yingjie Hu, Negar Elhami-Khorasani, Kai Sun, Ryan Zhenqi Zhou
05 Sep 2025
An Empirical Evaluation of Factors Affecting SHAP Explanation of Time Series Classification
D. Serramazza, Nikos Papadeas, Zahraa Abdallah, Georgiana Ifrim
AI4TS, FAtt
03 Sep 2025
AnomalyExplainer: Explainable AI for LLM-based anomaly detection using BERTViz and Captum
Prasasthy Balasubramanian, Dumindu Kankanamge, Ekaterina Gilman, Mourad Oussalah
26 Aug 2025
Interpreting the Effects of Quantization on LLMs
Manpreet Singh, Hassan Sajjad
MQ, MILM
22 Aug 2025
Model Interpretability and Rationale Extraction by Input Mask Optimization
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Marc Brinner, Sina Zarriess
AAML
15 Aug 2025
On the Complexity-Faithfulness Trade-off of Gradient-Based Explanations
Amir Mehrpanah, Matteo Gamba, Kevin Smith, Hossein Azizpour
FAtt
14 Aug 2025
SMA: Who Said That? Auditing Membership Leakage in Semi-Black-box RAG Controlling
Shixuan Sun, Yaning Tan, Ruoyu Chen, Jianjie Huang, Jingzhi Li, Xiaochun Cao
12 Aug 2025
Exploring Content and Social Connections of Fake News with Explainable Text and Graph Learning
Vítor N. Lourenço, A. Paes, Tillman Weyde
11 Aug 2025
How Does a Deep Neural Network Look at Lexical Stress?
Itai Allouche, Itay Asael, Rotem Rousso, Vered Dassa, Ann R. Bradlow, Seung-Eun Kim, Matthew A. Goldrick, Joseph Keshet
10 Aug 2025
No Masks Needed: Explainable AI for Deriving Segmentation from Classification
Mosong Ma, Tania Stathaki, Michalis Lazarou
06 Aug 2025
DeepFaith: A Domain-Free and Model-Agnostic Unified Framework for Highly Faithful Explanations
Yuhan Guo, Lizhong Ding, Shihan Jia, Yanyu Ren, P. Li, Jiarun Fu, Changsheng Li, Ye Yuan, Guoren Wang
05 Aug 2025
Uncovering Latent Connections in Indigenous Heritage: Semantic Pipelines for Cultural Preservation in Brazil
Luis Vitor Zerkowski, Nina S. T. Hirata
31 Jul 2025
POLARIS: Explainable Artificial Intelligence for Mitigating Power Side-Channel Leakage
Design Automation Conference (DAC), 2025
Tanzim Mahfuz, Sudipta Paria, Tasneem Suha, Swarup Bhunia, Prabuddha Chakraborty
29 Jul 2025
Large Learning Rates Simultaneously Achieve Robustness to Spurious Correlations and Compressibility
Melih Barsbey, Lucas Prieto, Stefanos Zafeiriou, Tolga Birdal
23 Jul 2025
PyG 2.0: Scalable Learning on Real World Graphs
Matthias Fey, Jinu Sunil, Akihiro Nitta, Rishi Puri, Manan Shah, ..., Vid Kocijan, Zecheng Zhang, Xinwei He, J. E. Lenssen, J. Leskovec
GNN, AI4CE
22 Jul 2025
OrdShap: Feature Position Importance for Sequential Black-Box Models
Davin Hill, Brian L. Hill, A. Masoomi, Vijay S. Nori, Robert E. Tillman, Jennifer Dy
FAtt
16 Jul 2025
On the Effectiveness of Methods and Metrics for Explainable AI in Remote Sensing Image Scene Classification
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (IEEE J-STARS), 2025
Jonas Klotz, Tom Burgert, Tim Siebert
08 Jul 2025
Bridging Ethical Principles and Algorithmic Methods: An Alternative Approach for Assessing Trustworthiness in AI Systems
Michael Papademas, Xenia Ziouvelou, Antonis Troumpoukis, Vangelis Karkaletsis
28 Jun 2025
Leveraging Influence Functions for Resampling Data in Physics-Informed Neural Networks
Jonas R. Naujoks, Aleksander Krasowski, Moritz Weckbecker, Galip Umit Yolcu, Thomas Wiegand, Sebastian Lapuschkin, Wojciech Samek, R. P. Klausen
TDI, PINN, AI4CE
19 Jun 2025
BMFM-RNA: An Open Framework for Building and Evaluating Transcriptomic Foundation Models
Bharath Dandala, Michael M. Danziger, Ella Barkan, Tanwi Biswas, Viatcheslav Gurev, ..., Akira Koseki, Tal Kozlovski, Michal Rosen-Zvi, Yishai Shimoni, Ching-Huei Tsou
AI4CE
17 Jun 2025
Towards Large Language Models with Self-Consistent Natural Language Explanations
Sahar Admoni, Ofra Amir, Assaf Hallak, Yftah Ziser
LRM
09 Jun 2025
Interpretation Meets Safety: A Survey on Interpretation Methods and Tools for Improving LLM Safety
Seongmin Lee, Aeree Cho, Grace C. Kim, ShengYun Peng, Mansi Phute, Duen Horng Chau
LM&MA, AI4CE
05 Jun 2025
XAI-Units: Benchmarking Explainability Methods with Unit Tests
Conference on Fairness, Accountability and Transparency (FAccT), 2025
Jun Rui Lee, Sadegh Emami, Michael David Hollins, Timothy C. H. Wong, Carlos Ignacio Villalobos Sánchez, Francesca Toni, Dekai Zhang, Adam Dejl
01 Jun 2025
ExplainBench: A Benchmark Framework for Local Model Explanations in Fairness-Critical Applications
James Afful
FAtt, FedML
31 May 2025
Soft-CAM: Making black box models self-explainable for high-stakes decisions
K. Djoumessi, Philipp Berens
FAtt, BDL
23 May 2025
On the reliability of feature attribution methods for speech classification
Gaofei Shen, Hosein Mohebbi, Arianna Bisazza, Afra Alishahi, Grzegorz Chrupała
22 May 2025
Attributional Safety Failures in Large Language Models under Code-Mixed Perturbations
Somnath Banerjee, Pratyush Chatterjee, Shanu Kumar, Sayan Layek, Parag Agrawal, Rima Hazra, Animesh Mukherjee
AAML
20 May 2025