Captum: A unified and generic model interpretability library for PyTorch

16 September 2020
Narine Kokhlikyan
Vivek Miglani
Miguel Martin
Edward Wang
B. Alsallakh
Jonathan Reynolds
Alexander Melnikov
Natalia Kliushkina
Carlos Araya
Siqi Yan
Orion Reblitz-Richardson
    FAtt
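Captum exposes gradient- and perturbation-based attribution algorithms (Integrated Gradients, Saliency, DeepLIFT, and others) behind a common interface. The sketch below is a minimal illustration of that interface, assuming a toy classifier defined here purely for demonstration (it is not from the paper); it attributes a class prediction to input features with IntegratedGradients.

```python
# Minimal sketch of Captum's attribution interface.
# The ToyClassifier below is a hypothetical stand-in model, not from the paper.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

class ToyClassifier(nn.Module):
    def __init__(self, in_features=8, num_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 16),
            nn.ReLU(),
            nn.Linear(16, num_classes),
        )

    def forward(self, x):
        return self.net(x)

model = ToyClassifier().eval()
inputs = torch.randn(4, 8, requires_grad=True)

# Attribute the output for class 0 to each input feature; the convergence
# delta estimates the approximation error of the integral.
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(inputs, target=0, return_convergence_delta=True)
print(attributions.shape)  # same shape as the inputs: (4, 8)
```

Other attribution methods in the library follow the same pattern: construct the method with the model (or forward function), then call attribute on the inputs.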

Papers citing "Captum: A unified and generic model interpretability library for PyTorch"

50 / 365 papers shown
PnPXAI: A Universal XAI Framework Providing Automatic Explanations Across Diverse Modalities and Models
Seongun Kim
Sol-A. Kim
Geonhyeong Kim
Enver Menadjiev
Chanwoo Lee
Seongwook Chung
Nari Kim
Jaesik Choi
24
0
0
15 May 2025
Implet: A Post-hoc Subsequence Explainer for Time Series Models
Fanyu Meng
Ziwen Kan
Shahbaz Rezaei
Z. Kong
Xin Chen
Xin Liu
AI4TS
14
0
0
13 May 2025
ABE: A Unified Framework for Robust and Faithful Attribution-Based Explainability
Zhiyu Zhu
Jiayu Zhang
Zhibo Jin
Fang Chen
Jianlong Zhou
FAtt
24
0
0
03 May 2025
Polysemy of Synthetic Neurons Towards a New Type of Explanatory Categorical Vector Spaces
Michael Pichat
William Pogrund
Paloma Pichat
Judicael Poumay
Armanouche Gasparian
Samuel Demarchi
Martin Corbet
Alois Georgeon
Michael Veillet-Guillem
MILM
24
0
0
30 Apr 2025
Dual Explanations via Subgraph Matching for Malware Detection
Hossein Shokouhinejad
Roozbeh Razavi-Far
Griffin Higgins
Ali Ghorbani
AAML
36
0
0
29 Apr 2025
Towards a deep learning approach for classifying treatment response in glioblastomas
Ana Matoso
Catarina Passarinho
Marta P. Loureiro
José Maria Moreira
Patrícia Figueiredo
Rita G. Nunes
AI4CE
26
0
0
25 Apr 2025
BrainPrompt: Multi-Level Brain Prompt Enhancement for Neurological Condition Identification
Jiaxing Xu
Kai He
Yue Tang
Wei Li
Mengcheng Lan
Xia Dong
Yiping Ke
Mengling Feng
14
0
0
12 Apr 2025
Secure Diagnostics: Adversarial Robustness Meets Clinical Interpretability
Mohammad Hossein Najafi
Mohammad Morsali
Mohammadreza Pashanejad
Saman Soleimani Roudi
Mohammad Norouzi
Saeed Bagheri Shouraki
AAML
23
0
0
07 Apr 2025
shapr: Explaining Machine Learning Models with Conditional Shapley Values in R and Python
Martin Jullum
Lars Henry Berge Olsen
Jon Lachmann
Annabelle Redelmeier
TDI
FAtt
68
2
0
02 Apr 2025
Fourier Feature Attribution: A New Efficiency Attribution Method
Zechen Liu
Feiyang Zhang
Wei Song
X. Li
Wei Wei
FAtt
57
0
0
02 Apr 2025
Which LIME should I trust? Concepts, Challenges, and Solutions
Patrick Knab
Sascha Marton
Udo Schlegel
Christian Bartelt
FAtt
38
0
0
31 Mar 2025
Pairwise Matching of Intermediate Representations for Fine-grained Explainability
Lauren Shrack
T. Haucke
Antoine Salaün
Arjun Subramonian
Sara Beery
49
0
0
28 Mar 2025
Explainable AI-Guided Efficient Approximate DNN Generation for Multi-Pod Systolic Arrays
Ayesha Siddique
Khurram Khalil
K. A. Hoque
37
0
0
20 Mar 2025
Intra-neuronal attention within language models Relationships between activation and semantics
Michael Pichat
William Pogrund
Paloma Pichat
Armanouche Gasparian
Samuel Demarchi
Corbet Alois Georgeon
Michael Veillet-Guillem
MILM
41
0
0
17 Mar 2025
Axiomatic Explainer Globalness via Optimal Transport
Davin Hill
Josh Bone
A. Masoomi
Max Torop
Jennifer Dy
97
1
0
13 Mar 2025
Tangentially Aligned Integrated Gradients for User-Friendly Explanations
Lachlan Simpson
Federico Costanza
Kyle Millar
A. Cheng
Cheng-Chew Lim
Hong-Gunn Chew
FAtt
78
2
0
11 Mar 2025
Now you see me! A framework for obtaining class-relevant saliency maps
Nils Philipp Walter
Jilles Vreeken
Jonas Fischer
FAtt
40
0
0
10 Mar 2025
FW-Shapley: Real-time Estimation of Weighted Shapley Values
Pranoy Panda
Siddharth Tandon
V. Balasubramanian
TDI
65
0
0
09 Mar 2025
Towards Locally Explaining Prediction Behavior via Gradual Interventions and Measuring Property Gradients
Niklas Penzel
Joachim Denzler
FAtt
48
0
0
07 Mar 2025
Synthetic Categorical Restructuring large Or How AIs Gradually Extract Efficient Regularities from Their Experience of the World
Michael Pichat
William Pogrund
Paloma Pichat
Armanouche Gasparian
Samuel Demarchi
Martin Corbet
Alois Georgeon
Theo Dasilva
Michael Veillet-Guillem
47
2
0
25 Feb 2025
Time-series attribution maps with regularized contrastive learning
Steffen Schneider
Rodrigo González Laiz
Anastasiia Filippova
Markus Frey
Mackenzie W. Mathis
BDL
FAtt
CML
AI4TS
76
0
0
17 Feb 2025
Generalized Attention Flow: Feature Attribution for Transformer Models via Maximum Flow
Behrooz Azarkhalili
Maxwell Libbrecht
34
0
0
14 Feb 2025
Feature Importance Depends on Properties of the Data: Towards Choosing the Correct Explanations for Your Data and Decision Trees based Models
Célia Wafa Ayad
Thomas Bonnier
Benjamin Bosch
Sonali Parbhoo
Jesse Read
FAtt
XAI
98
0
0
11 Feb 2025
B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable
Shreyash Arya
Sukrut Rao
Moritz Bohle
Bernt Schiele
68
2
0
28 Jan 2025
xMIL: Insightful Explanations for Multiple Instance Learning in Histopathology
Julius Hense
M. J. Idaji
Oliver Eberle
Thomas Schnake
Jonas Dippel
Laure Ciernik
Oliver Buchstab
Andreas Mock
Frederick Klauschen
Klaus-Robert Müller
49
3
0
08 Jan 2025
Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics
Lukas Klein
Carsten T. Lüth
U. Schlegel
Till J. Bungert
Mennatallah El-Assady
Paul F. Jäger
XAI
ELM
34
2
0
03 Jan 2025
How Do Artificial Intelligences Think? The Three Mathematico-Cognitive Factors of Categorical Segmentation Operated by Synthetic Neurons
Michael Pichat
William Pogrund
Armanush Gasparian
Paloma Pichat
Samuel Demarchi
Michael Veillet-Guillem
42
3
0
26 Dec 2024
Self-Supervised Radiograph Anatomical Region Classification -- How Clean Is Your Real-World Data?
Simon Langer
J. Ritter
R. Braren
Daniel Rueckert
Paul Hager
74
0
0
20 Dec 2024
Can Input Attributions Interpret the Inductive Reasoning Process Elicited in In-Context Learning?
Mengyu Ye
Tatsuki Kuribayashi
Goro Kobayashi
Jun Suzuki
LRM
92
0
0
20 Dec 2024
MATCHED: Multimodal Authorship-Attribution To Combat Human Trafficking in Escort-Advertisement Data
V. Saxena
Benjamin Bashpole
Gijs van Dijck
Gerasimos Spanakis
72
0
0
18 Dec 2024
From Flexibility to Manipulation: The Slippery Slope of XAI Evaluation
Kristoffer Wickstrøm
Marina M.-C. Höhne
Anna Hedström
AAML
79
2
0
07 Dec 2024
Quantized and Interpretable Learning Scheme for Deep Neural Networks in Classification Task
Alireza Maleki
Mahsa Lavaei
Mohsen Bagheritabar
Salar Beigzad
Zahra Abadi
MQ
67
0
0
05 Dec 2024
Establishing and Evaluating Trustworthy AI: Overview and Research Challenges
Dominik Kowald
S. Scher
Viktoria Pammer-Schindler
Peter Müllner
Kerstin Waxnegger
...
Andreas Truegler
Eduardo E. Veas
Roman Kern
Tomislav Nad
Simone Kopeinik
34
3
0
15 Nov 2024
LA4SR: illuminating the dark proteome with generative AI
David R. Nelson
Ashish Kumar Jaiswal
Noha Ismail
Alexandra Mystikou
Kourosh Salehi-Ashtiani
22
0
0
11 Nov 2024
An Open API Architecture to Discover the Trustworthy Explanation of Cloud AI Services
Zerui Wang
Yan Liu
Jun Huang
49
1
0
05 Nov 2024
Differentially Private Integrated Decision Gradients (IDG-DP) for Radar-based Human Activity Recognition
Idris Zakariyya
Linda Tran
Kaushik Bhargav Sivangi
Paul Henderson
F. Deligianni
26
0
0
04 Nov 2024
Identifying Spatio-Temporal Drivers of Extreme Events
Mohamad Hakam Shams Eddin
Juergen Gall
AI4TS
48
0
0
31 Oct 2024
Transformers to Predict the Applicability of Symbolic Integration Routines
Rashid Barket
Uzma Shafiq
Matthew England
Juergen Gerhard
21
0
0
31 Oct 2024
Guided Game Level Repair via Explainable AI
Mahsa Bazzaz
Seth Cooper
49
1
0
30 Oct 2024
CNN Explainability with Multivector Tucker Saliency Maps for Self-Supervised Models
Aymene Mohammed Bouayed
Samuel Deslauriers-Gauthier
Adrian Iaccovelli
D. Naccache
25
0
0
30 Oct 2024
Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images
Elham Bagheri
Yalda Mohsenzadeh
21
0
0
19 Oct 2024
PromptExp: Multi-granularity Prompt Explanation of Large Language Models
Ximing Dong
Shaowei Wang
Dayi Lin
Gopi Krishnan Rajbahadur
Boquan Zhou
Shichao Liu
Ahmed E. Hassan
AAML
LRM
25
1
0
16 Oct 2024
Rethinking Visual Counterfactual Explanations Through Region Constraint
Bartlomiej Sobieski
Jakub Grzywaczewski
Bartlomiej Sadlej
Matthew Tivnan
P. Biecek
CML
41
0
0
16 Oct 2024
Contrastive learning of cell state dynamics in response to perturbations
Soorya Pradeep
Alishba Imran
Ziwen Liu
Taylla Milena Theodoro
Eduardo Hirata-Miyasaki
...
Madhura Bhave
Sudip Khadka
Hunter Woosley
Carolina Arias
Shalin B. Mehta
23
0
0
15 Oct 2024
dattri: A Library for Efficient Data Attribution
Junwei Deng
Ting-Wei Li
Shiyuan Zhang
Shixuan Liu
Yijun Pan
Hao Huang
Xinhe Wang
Pingbang Hu
Xingjian Zhang
Jiaqi W. Ma
TDI
34
3
0
06 Oct 2024
Explainable Earth Surface Forecasting under Extreme Events
Oscar J. Pellicer-Valero
Miguel-Ángel Fernández-Torres
Chaonan Ji
Miguel D. Mahecha
Gustau Camps-Valls
21
0
0
02 Oct 2024
shapiq: Shapley Interactions for Machine Learning
Maximilian Muschalik
Hubert Baniecki
Fabian Fumagalli
Patrick Kolpaczki
Barbara Hammer
Eyke Hüllermeier
TDI
27
9
0
02 Oct 2024
One Wave to Explain Them All: A Unifying Perspective on Post-hoc Explainability
Gabriel Kasmi
Amandine Brunetto
Thomas Fel
Jayneel Parekh
AAML
FAtt
25
0
0
02 Oct 2024
A Methodology for Explainable Large Language Models with Integrated Gradients and Linguistic Analysis in Text Classification
Marina Ribeiro
Bárbara Malcorra
Natália B. Mota
Rodrigo Wilkens
Aline Villavicencio
Lilian C. Hubner
César Rennó-Costa
19
1
0
30 Sep 2024
Enhancing Feature Selection and Interpretability in AI Regression Tasks Through Feature Attribution
Alexander Hinterleitner
T. Bartz-Beielstein
Richard Schulz
Sebastian Spengler
Thomas Winter
Christoph Leitenmeier
34
1
0
25 Sep 2024