Exploring Practitioner Perspectives On Training Data Attribution Explanations

31 October 2023
Elisa Nguyen, Evgenii Kortukov, Jean Y. Song, Seong Joon Oh
TDI

Papers citing "Exploring Practitioner Perspectives On Training Data Attribution Explanations"

19 papers shown
TRAK: Attributing Model Behavior at Scale
Sung Min Park, Kristian Georgiev, Andrew Ilyas, Guillaume Leclerc, Aleksander Madry
TDI · 24 Mar 2023

Training Data Influence Analysis and Estimation: A Survey
Zayd Hammoudeh, Daniel Lowd
TDI · 09 Dec 2022

Robust Speech Recognition via Large-Scale Weak Supervision
Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, C. McLeavey, Ilya Sutskever
OffRL · 06 Dec 2022

Datamodels: Predicting Predictions from Training Data
Andrew Ilyas, Sung Min Park, Logan Engstrom, Guillaume Leclerc, Aleksander Madry
TDI · 01 Feb 2022

Scaling Up Influence Functions
Andrea Schioppa, Polina Zablotskaia, David Vilar, Artem Sokolov
TDI · 06 Dec 2021

The Who in XAI: How AI Background Shapes Perceptions of AI Explanations
Upol Ehsan, Samir Passi, Q. V. Liao, Larry Chan, I-Hsiang Lee, Michael J. Muller, Mark O. Riedl
28 Jul 2021

Combining Feature and Instance Attribution to Detect Artifacts
Pouya Pezeshkpour, Sarthak Jain, Sameer Singh, Byron C. Wallace
TDI · 01 Jul 2021

Interactive Label Cleaning with Example-based Explanations
Stefano Teso, A. Bontempelli, Fausto Giunchiglia, Andrea Passerini
07 Jun 2021

FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging
Han Guo, Nazneen Rajani, Peter Hase, Joey Tianyi Zhou, Caiming Xiong
TDI · 31 Dec 2020

Influence Functions in Deep Learning Are Fragile
S. Basu, Phillip E. Pope, Soheil Feizi
TDI · 25 Jun 2020

Estimating Training Data Influence by Tracing Gradient Descent
G. Pruthi, Frederick Liu, Mukund Sundararajan, Satyen Kale
TDI · 19 Feb 2020

Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach
Upol Ehsan, Mark O. Riedl
04 Feb 2020

Questioning the AI: Informing Design Practices for Explainable AI User Experiences
Q. V. Liao, D. Gruen, Sarah Miller
08 Jan 2020

On the Accuracy of Influence Functions for Measuring Group Effects
Pang Wei Koh, Kai-Siang Ang, H. Teo, Percy Liang
TDI · 30 May 2019

Understanding the Origins of Bias in Word Embeddings
Marc-Etienne Brunet, Colleen Alkalay-Houlihan, Ashton Anderson, R. Zemel
FaML · 08 Oct 2018

A Survey Of Methods For Explaining Black Box Models
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti
XAI · 06 Feb 2018

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
FAtt · 22 May 2017

Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang
TDI · 14 Mar 2017

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
XAI, FaML · 28 Feb 2017