ExSum: From Local Explanations to Model Understanding
arXiv: 2205.00130
30 April 2022
Yilun Zhou, Marco Tulio Ribeiro, J. Shah
Tags: FAtt, LRM

Papers citing "ExSum: From Local Explanations to Model Understanding"

18 / 18 papers shown
Title

Explanation sensitivity to the randomness of large language models: the case of journalistic text classification
Jérémie Bogaert, Marie-Catherine de Marneffe, Antonin Descampe, Louis Escouflaire, Cedrick Fairon, François-Xavier Standaert
07 Oct 2024

Faithfulness and the Notion of Adversarial Sensitivity in NLP Explanations
Supriya Manna, Niladri Sett
Tags: AAML
26 Sep 2024

Exploring the Trade-off Between Model Performance and Explanation Plausibility of Text Classifiers Using Human Rationales
Lucas Resck, Marcos M. Raimundo, Jorge Poco
03 Apr 2024

What is the focus of XAI in UI design? Prioritizing UI design principles for enhancing XAI user experience
Dian Lei, Yao He, Jianyou Zeng
21 Feb 2024

OpenHEXAI: An Open-Source Framework for Human-Centered Evaluation of Explainable Machine Learning
Jiaqi Ma, Vivian Lai, Yiming Zhang, Chacha Chen, Paul Hamilton, Davor Ljubenkov, Himabindu Lakkaraju, Chenhao Tan
Tags: ELM
20 Feb 2024

Evaluating the Utility of Model Explanations for Model Development
Shawn Im, Jacob Andreas, Yilun Zhou
Tags: XAI, FAtt, ELM
10 Dec 2023

Can Large Language Models Explain Themselves? A Study of LLM-Generated Self-Explanations
Shiyuan Huang, Siddarth Mamidanna, Shreedhar Jangam, Yilun Zhou, Leilani H. Gilpin
Tags: LRM, MILM, ELM
17 Oct 2023

Fixing confirmation bias in feature attribution methods via semantic match
Giovanni Cinà, Daniel Fernandez-Llaneza, Ludovico Deponte, Nishant Mishra, Tabea E. Rober, Sandro Pezzelle, Iacer Calixto, Rob Goedhart, Ş. İlker Birbil
Tags: FAtt
03 Jul 2023

Towards Reconciling Usability and Usefulness of Explainable AI Methodologies
Pradyumna Tambwekar, Matthew C. Gombolay
13 Jan 2023

Towards Faithful Model Explanation in NLP: A Survey
Qing Lyu, Marianna Apidianaki, Chris Callison-Burch
Tags: XAI
22 Sep 2022

Summarization Programs: Interpretable Abstractive Summarization with Neural Modular Trees
Swarnadeep Saha, Shiyue Zhang, Peter Hase, Joey Tianyi Zhou
21 Sep 2022

Gaussian Process Surrogate Models for Neural Networks
Michael Y. Li, Erin Grant, Thomas L. Griffiths
Tags: BDL, SyDa
11 Aug 2022

On Interactive Explanations as Non-Monotonic Reasoning
Guilherme Paulino-Passos, Francesca Toni
Tags: FAtt, LRM
30 Jul 2022

Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks
Tilman Räuker, A. Ho, Stephen Casper, Dylan Hadfield-Menell
Tags: AAML, AI4CE
27 Jul 2022

Analyzing Explainer Robustness via Probabilistic Lipschitzness of Prediction Functions
Zulqarnain Khan, Davin Hill, A. Masoomi, Joshua Bone, Jennifer Dy
Tags: AAML
24 Jun 2022

The Solvability of Interpretability Evaluation Metrics
Yilun Zhou, J. Shah
18 May 2022

The Irrationality of Neural Rationale Models
Yiming Zheng, Serena Booth, J. Shah, Yilun Zhou
14 Oct 2021

Low Frequency Names Exhibit Bias and Overfitting in Contextualizing Language Models
Robert Wolfe, Aylin Caliskan
01 Oct 2021