ProtoTEx: Explaining Model Decisions with Prototype Tensors
arXiv:2204.05426 · 11 April 2022
Anubrata Das, Chitrank Gupta, Venelin Kovatchev, Matthew Lease, J. Li
Papers citing "ProtoTEx: Explaining Model Decisions with Prototype Tensors" (20 of 20 papers shown):

- A Transformer and Prototype-based Interpretable Model for Contextual Sarcasm Detection. Ximing Wen, Rezvaneh Rezapour. 14 Mar 2025.
- Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking. Greta Warren, Irina Shklovski, Isabelle Augenstein. 13 Feb 2025.
- Regulation of Language Models With Interpretability Will Likely Result In A Performance Trade-Off. Eoin M. Kenny, Julie A. Shah. 12 Dec 2024.
- GAProtoNet: A Multi-head Graph Attention-based Prototypical Network for Interpretable Text Classification. Ximing Wen, Wenjuan Tan, Rosina O. Weber. 20 Sep 2024.
- On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs. Nitay Calderon, Roi Reichart. 27 Jul 2024.
- Exploring the Plausibility of Hate and Counter Speech Detectors with Explainable AI. Adrian Jaques Böck, D. Slijepcevic, Matthias Zeppelzauer. 25 Jul 2024.
- They Look Like Each Other: Case-based Reasoning for Explainable Depression Detection on Twitter using Large Language Models. Mohammad Saeid Mahdavinejad, Peyman Adibi, A. Monadjemi, Pascal Hitzler. 21 Jul 2024.
- Robust Text Classification: Analyzing Prototype-Based Networks. Zhivar Sourati, D. Deshpande, Filip Ilievski, Kiril Gashteovski, S. Saralajew. 11 Nov 2023.
- Proto-lm: A Prototypical Network-Based Framework for Built-in Interpretability in Large Language Models. Sean Xie, Soroush Vosoughi, Saeed Hassanpour. 03 Nov 2023.
- Interpretable-by-Design Text Understanding with Iteratively Generated Concept Bottleneck. Josh Magnus Ludan, Qing Lyu, Yue Yang, Liam Dugan, Mark Yatskar, Chris Callison-Burch. 30 Oct 2023.
- Large Language Models Help Humans Verify Truthfulness -- Except When They Are Convincingly Wrong. Chenglei Si, Navita Goyal, Sherry Tongshuang Wu, Chen Zhao, Shi Feng, Hal Daumé, Jordan L. Boyd-Graber. 19 Oct 2023.
- InterroLang: Exploring NLP Models and Datasets through Dialogue-based Explanations. Nils Feldhus, Qianli Wang, Tatiana Anikina, Sahil Chopra, Cennet Oguz, Sebastian Möller. 09 Oct 2023.
- The State of Human-centered NLP Technology for Fact-checking. Anubrata Das, Houjiang Liu, Venelin Kovatchev, Matthew Lease. 08 Jan 2023.
- Explainability of Text Processing and Retrieval Methods: A Critical Survey. Sourav Saha, Debapriyo Majumdar, Mandar Mitra. 14 Dec 2022.
- Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments. Zhivar Sourati, Vishnu Priya Prasanna Venkatesh, D. Deshpande, Himanshu Rawlani, Filip Ilievski, Hông-Ân Sandlin, Alain Mermoud. 12 Dec 2022.
- Intermediate Entity-based Sparse Interpretable Representation Learning. Diego Garcia-Olano, Yasumasa Onoe, Joydeep Ghosh, Byron C. Wallace. 03 Dec 2022.
- This Patient Looks Like That Patient: Prototypical Networks for Interpretable Diagnosis Prediction from Clinical Text. Betty van Aken, Jens-Michalis Papaioannou, M. Naik, G. Eleftheriadis, Wolfgang Nejdl, Felix Alexander Gers, Alexander Löser. 16 Oct 2022.
- Benchmarking and Survey of Explanation Methods for Black Box Models. F. Bodria, F. Giannotti, Riccardo Guidotti, Francesca Naretto, D. Pedreschi, S. Rinzivillo. 25 Feb 2021.
- Explainable Automated Fact-Checking for Public Health Claims. Neema Kotonya, Francesca Toni. 19 Oct 2020.
- Towards A Rigorous Science of Interpretable Machine Learning. Finale Doshi-Velez, Been Kim. 28 Feb 2017.