arXiv: 2302.02162 (v3)
AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models
Proceedings on Privacy Enhancing Technologies (PoPETs), 2023
4 February 2023
Abdullah Çağlar Öksüz, Anisa Halimi, Erman Ayday
Tags: ELM, AAML
Links: arXiv (abs) · PDF · HTML · GitHub (1★)
Papers citing "AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models" (3 of 3 shown)
On the interplay of Explainability, Privacy and Predictive Performance with Explanation-assisted Model Extraction
Fatima Ezzeddine, Rinad Akel, Ihab Sbeity, Silvia Giordano, Marc Langheinrich, Omran Ayoub
Tags: SILM. 13 May 2025
From Counterfactuals to Trees: Competitive Analysis of Model Extraction Attacks
Awa Khouna, Julien Ferry, Thibaut Vidal
Tags: AAML. 7 Feb 2025
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAtt
FaML
2.7K
21,148
0
16 Feb 2016