AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models

Proceedings on Privacy Enhancing Technologies (PoPETs), 2023
4 February 2023
Abdullah Çaglar Öksüz
Anisa Halimi
Erman Ayday

Papers citing "AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models"

3 papers:

On the interplay of Explainability, Privacy and Predictive Performance with Explanation-assisted Model Extraction
Fatima Ezzeddine, Rinad Akel, Ihab Sbeity, Silvia Giordano, Marc Langheinrich, Omran Ayoub
13 May 2025
From Counterfactuals to Trees: Competitive Analysis of Model Extraction Attacks
Awa Khouna, Julien Ferry, Thibaut Vidal
07 Feb 2025
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
16 Feb 2016