Stealing Machine Learning Models via Prediction APIs
Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, Thomas Ristenpart
arXiv:1609.02943. 9 September 2016. Communities: SILM, MLAU.

Papers citing "Stealing Machine Learning Models via Prediction APIs" (31 papers)

StyleRec: A Benchmark Dataset for Prompt Recovery in Writing Style Transformation
Shenyang Liu, Yang Gao, Shaoyan Zhai, Liqiang Wang. 06 Apr 2025.

Atlas: A Framework for ML Lifecycle Provenance & Transparency
Marcin Spoczynski, Marcela S. Melara, Siyang Song. 26 Feb 2025.

Examining the Threat Landscape: Foundation Models and Model Stealing
Ankita Raj, Deepankar Varma, Chetan Arora. Communities: AAML. 25 Feb 2025.

Encryption-Friendly LLM Architecture
Donghwan Rho, Taeseong Kim, Minje Park, Jung Woo Kim, Hyunsik Chae, Jung Hee Cheon, Ernest K. Ryu. 24 Feb 2025.

SEA: Shareable and Explainable Attribution for Query-based Black-box Attacks
Yue Gao, Ilia Shumailov, Kassem Fawaz. Communities: AAML. 21 Feb 2025.

HateBench: Benchmarking Hate Speech Detectors on LLM-Generated Content and Hate Campaigns
Xinyue Shen, Yixin Wu, Y. Qu, Michael Backes, Savvas Zannettou, Yang Zhang. 28 Jan 2025.

Enhancing Privacy in the Early Detection of Sexual Predators Through Federated Learning and Differential Privacy
Khaoula Chehbouni, Martine De Cock, Gilles Caporossi, Afaf Taik, Reihaneh Rabbany, G. Farnadi. 21 Jan 2025.

Neural Honeytrace: A Robust Plug-and-Play Watermarking Framework against Model Extraction Attacks
Yixiao Xu, Binxing Fang, Rui Wang, Yinghai Zhou, S. Ji, Yuan Liu, Mohan Li. Communities: AAML, MIACV. 16 Jan 2025.

Position: A taxonomy for reporting and describing AI security incidents
L. Bieringer, Kevin Paeth, Andreas Wespi, Kathrin Grosse, Alexandre Alahi. 19 Dec 2024.

In-Context Experience Replay Facilitates Safety Red-Teaming of Text-to-Image Diffusion Models
Zhi-Yi Chin, Kuan-Chen Mu, Mario Fritz, Pin-Yu Chen. Communities: DiffM. 25 Nov 2024.

Poison-splat: Computation Cost Attack on 3D Gaussian Splatting
Jiahao Lu, Yifan Zhang, Qiuhong Shen, Xinchao Wang, Shuicheng Yan. Communities: 3DGS. 10 Oct 2024.

Pre-trained Encoder Inference: Revealing Upstream Encoders In Downstream Machine Learning Services
Shaopeng Fu, Xuexue Sun, Ke Qing, Tianhang Zheng, Di Wang. Communities: AAML, MIACV, SILM. 05 Aug 2024.

Private Collaborative Edge Inference via Over-the-Air Computation
Selim F. Yilmaz, Burak Hasircioglu, Li Qiao, Deniz Gunduz. Communities: FedML. 30 Jul 2024.

Locking Machine Learning Models into Hardware
Eleanor Clifford, Adhithya Saravanan, Harry Langford, Cheng Zhang, Yiren Zhao, Robert D. Mullins, Ilia Shumailov, Jamie Hayes. 31 May 2024.

ModelShield: Adaptive and Robust Watermark against Model Extraction Attack
Kaiyi Pang, Tao Qi, Chuhan Wu, Minhao Bai, Minghu Jiang, Yongfeng Huang. Communities: AAML, WaLM. 03 May 2024.

Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems
Guangjing Wang, Ce Zhou, Yuanda Wang, Bocheng Chen, Hanqing Guo, Qiben Yan. Communities: AAML, SILM. 20 Nov 2023.

GTree: GPU-Friendly Privacy-preserving Decision Tree Training and Inference
Qifan Wang, Shujie Cui, Lei Zhou, Ye Dong, Jianli Bai, Yun Sing Koh, Giovanni Russello. 01 May 2023.

MOVE: Effective and Harmless Ownership Verification via Embedded External Features
Yiming Li, Linghui Zhu, Xiaojun Jia, Yang Bai, Yong Jiang, Shutao Xia, Xiaochun Cao, Kui Ren. Communities: AAML. 04 Aug 2022.

Energy-Latency Attacks via Sponge Poisoning
Antonio Emanuele Cinà, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo. Communities: SILM. 14 Mar 2022.

Practical No-box Adversarial Attacks with Training-free Hybrid Image Transformation
Qilong Zhang, Chaoning Zhang, Chaoqun Li, Xuanhan Wang, Jingkuan Song, Lianli Gao. Communities: AAML. 09 Mar 2022.

Training Differentially Private Models with Secure Multiparty Computation
Sikha Pentyala, Davis Railsback, Ricardo Maia, Rafael Dowsley, David Melanson, Anderson C. A. Nascimento, Martine De Cock. 05 Feb 2022.

Exploiting Defenses against GAN-Based Feature Inference Attacks in Federated Learning
Xinjian Luo, Xiangqi Zhu. Communities: FedML. 27 Apr 2020.

Cryptanalytic Extraction of Neural Network Models
Nicholas Carlini, Matthew Jagielski, Ilya Mironov. Communities: FedML, MLAU, MIACV, AAML. 10 Mar 2020.

SNIFF: Reverse Engineering of Neural Networks with Fault Attacks
J. Breier, Dirmanto Jap, Xiaolu Hou, S. Bhasin, Yang Liu. 23 Feb 2020.

MaskedNet: The First Hardware Inference Engine Aiming Power Side-Channel Protection
Anuj Dubey, Rosario Cammarota, Aydin Aysu. Communities: AAML. 29 Oct 2019.

Thieves on Sesame Street! Model Extraction of BERT-based APIs
Kalpesh Krishna, Gaurav Singh Tomar, Ankur P. Parikh, Nicolas Papernot, Mohit Iyyer. Communities: MIACV, MLAU. 27 Oct 2019.

Evasion Attacks against Machine Learning at Test Time
Battista Biggio, Igino Corona, Davide Maiorca, B. Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, Fabio Roli. Communities: AAML. 21 Aug 2017.

Practical Black-Box Attacks against Machine Learning
Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, S. Jha, Z. Berkay Celik, A. Swami. Communities: MLAU, AAML. 08 Feb 2016.

Poisoning Attacks against Support Vector Machines
Battista Biggio, B. Nelson, Pavel Laskov. Communities: AAML. 27 Jun 2012.

Query Strategies for Evading Convex-Inducing Classifiers
B. Nelson, Benjamin I. P. Rubinstein, Ling Huang, A. Joseph, Steven J. Lee, Satish Rao, J. D. Tygar. 03 Jul 2010.

Learning in a Large Function Space: Privacy-Preserving Mechanisms for SVM Learning
Benjamin I. P. Rubinstein, Peter L. Bartlett, Ling Huang, N. Taft. 30 Nov 2009.