Towards User-Focused Research in Training Data Attribution for Human-Centered Explainable AI

25 September 2024 (arXiv:2409.16978)
Elisa Nguyen, Johannes Bertram, Evgenii Kortukov, Jean Y. Song, Seong Joon Oh
TDI

Papers citing "Towards User-Focused Research in Training Data Attribution for Human-Centered Explainable AI"

45 papers
Data Debiasing with Datamodels (D3M): Improving Subgroup Robustness via Data Selection
Saachi Jain, Kimia Hamidieh, Kristian Georgiev, Andrew Ilyas, Marzyeh Ghassemi, Aleksander Madry
24 Jun 2024

What is Your Data Worth to GPT? LLM-Scale Data Valuation with Influence Functions
Sang Keun Choe, Hwijeen Ahn, Juhan Bae, Kewen Zhao, Minsoo Kang, ..., Teruko Mitamura, Jeff Schneider, Eduard Hovy, Roger C. Grosse, Eric Xing
TDI
22 May 2024

Training Data Attribution via Approximate Unrolled Differentiation
Juhan Bae, Wu Lin, Jonathan Lorraine, Roger C. Grosse
TDI, MU
20 May 2024

Dissecting users' needs for search result explanations
Prerna Juneja, Wenjuan Zhang, Alison Smith-Renner, Hemank Lamba, Joel R. Tetreault, Alex Jaimes
FAtt
29 Jan 2024

Error Discovery by Clustering Influence Embeddings
Fulton Wang, Julius Adebayo, Sarah Tan, Diego Garcia-Olano, Narine Kokhlikyan
07 Dec 2023

Intriguing Properties of Data Attribution on Diffusion Models
Xiaosen Zheng, Tianyu Pang, Chao Du, Jing Jiang, Min Lin
TDI
01 Nov 2023

DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models
Yongchan Kwon, Eric Wu, K. Wu, James Zou
DiffM, TDI
02 Oct 2023

Studying Large Language Model Generalization with Influence Functions
Roger C. Grosse, Juhan Bae, Cem Anil, Nelson Elhage, Alex Tamkin, ..., Karina Nguyen, Nicholas Joseph, Sam McCandlish, Jared Kaplan, Sam Bowman
TDI
07 Aug 2023
ConvXAI: Delivering Heterogeneous AI Explanations via Conversations to Support Human-AI Scientific Writing
Hua Shen, Chieh-Yang Huang, Tongshuang Wu, Ting-Hao 'Kenneth' Huang
16 May 2023
In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making
Raymond Fok, Daniel S. Weld
12 May 2023

TRAK: Attributing Model Behavior at Scale
Sung Min Park, Kristian Georgiev, Andrew Ilyas, Guillaume Leclerc, Aleksander Madry
TDI
24 Mar 2023

Revisiting the Fragility of Influence Functions
Jacob R. Epifano, Ravichandran Ramachandran, A. Masino, Ghulam Rasool
TDI
22 Mar 2023

Users are the North Star for AI Transparency
Alex Mei, Michael Stephen Saxon, Shiyu Chang, Zachary Chase Lipton, William Yang Wang
09 Mar 2023

Invisible Users: Uncovering End-Users' Requirements for Explainable AI via Explanation Forms and Goals
Weina Jin, Jianyu Fan, D. Gromala, Philippe Pasquier, Ghassan Hamarneh
10 Feb 2023

Training Data Influence Analysis and Estimation: A Survey
Zayd Hammoudeh, Daniel Lowd
TDI
09 Dec 2022

Robust Speech Recognition via Large-Scale Weak Supervision
Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, C. McLeavey, Ilya Sutskever
OffRL
06 Dec 2022

Towards Human-centered Explainable AI: A Survey of User Studies for Model Explanations
Yao Rong, Tobias Leemann, Thai-trang Nguyen, Lisa Fiedler, Peizhu Qian, Vaibhav Unhelkar, Tina Seidel, Gjergji Kasneci, Enkelejda Kasneci
ELM
20 Oct 2022
If Influence Functions are the Answer, Then What is the Question?
Juhan Bae, Nathan Ng, Alston Lo, Marzyeh Ghassemi, Roger C. Grosse
TDI
12 Sep 2022

Visual correspondence-based explanations improve AI robustness and human-AI team accuracy
Giang Nguyen, Mohammad Reza Taesiri, Anh Totti Nguyen
26 Jul 2022

Datamodels: Predicting Predictions from Training Data
Andrew Ilyas, Sung Min Park, Logan Engstrom, Guillaume Leclerc, Aleksander Madry
TDI
01 Feb 2022

From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlotterer, M. V. Keulen, C. Seifert
ELM, XAI
20 Jan 2022

HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky
06 Dec 2021

The Who in XAI: How AI Background Shapes Perceptions of AI Explanations
Upol Ehsan, Samir Passi, Q. V. Liao, Larry Chan, I-Hsiang Lee, Michael J. Muller, Mark O. Riedl
28 Jul 2021

Combining Feature and Instance Attribution to Detect Artifacts
Pouya Pezeshkpour, Sarthak Jain, Sameer Singh, Byron C. Wallace
TDI
01 Jul 2021
Interactive Label Cleaning with Example-based Explanations
Stefano Teso, A. Bontempelli, Fausto Giunchiglia, Andrea Passerini
07 Jun 2021

Input Similarity from the Neural Network Perspective
Guillaume Charpiat, N. Girard, Loris Felardos, Y. Tarabalka
10 Feb 2021

Expanding Explainability: Towards Social Transparency in AI systems
Upol Ehsan, Q. V. Liao, Michael J. Muller, Mark O. Riedl, Justin D. Weisz
12 Jan 2021

What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation
Vitaly Feldman, Chiyuan Zhang
TDI
09 Aug 2020

Influence Functions in Deep Learning Are Fragile
S. Basu, Phillip E. Pope, Soheil Feizi
TDI
25 Jun 2020

Estimating Training Data Influence by Tracing Gradient Descent
G. Pruthi, Frederick Liu, Mukund Sundararajan, Satyen Kale
TDI
19 Feb 2020

Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach
Upol Ehsan, Mark O. Riedl
04 Feb 2020

Questioning the AI: Informing Design Practices for Explainable AI User Experiences
Q. V. Liao, D. Gruen, Sarah Miller
08 Jan 2020
PyTorch: An Imperative Style, High-Performance Deep Learning Library
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, ..., Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala
ODL
03 Dec 2019

On Second-Order Group Influence Functions for Black-Box Predictions
S. Basu, Xuchen You, Soheil Feizi
TDI
01 Nov 2019

On the Accuracy of Influence Functions for Measuring Group Effects
Pang Wei Koh, Kai-Siang Ang, H. Teo, Percy Liang
TDI
30 May 2019

Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
FAtt, AAML, XAI
08 Oct 2018

Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller
XAI
22 Jun 2017

SmoothGrad: removing noise by adding noise
D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg
FAtt, ODL
12 Jun 2017
A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
FAtt
22 May 2017

Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang
TDI
14 Mar 2017

Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan
OOD, FAtt
04 Mar 2017

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
XAI, FaML
28 Feb 2017

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
FAtt, FaML
16 Feb 2016
Deep Residual Learning for Image Recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
MedIm
10 Dec 2015
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
FAtt
20 Dec 2013