ResearchTrend.AI

Stealing Hyperparameters in Machine Learning (arXiv:1802.05351)
Binghui Wang, Neil Zhenqiang Gong
14 February 2018 · AAML

Papers citing "Stealing Hyperparameters in Machine Learning"

50 / 206 papers shown
Memorization of Named Entities in Fine-tuned BERT Models
Andor Diera, N. Lell, Aygul Garifullina, A. Scherp
07 Dec 2022

Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong
06 Dec 2022 · SSL

Model Extraction Attack against Self-supervised Speech Models
Tsung-Yuan Hsu, Chen An Li, Tung-Yu Wu, Hung-yi Lee
29 Nov 2022

Federated Learning Attacks and Defenses: A Survey
Yao Chen, Yijie Gui, Hong Lin, Wensheng Gan, Yongdong Wu
27 Nov 2022 · FedML

On the Vulnerability of Data Points under Multiple Membership Inference Attacks and Target Models
Mauro Conti, Jiaxin Li, S. Picek
28 Oct 2022 · MIALM

New data poison attacks on machine learning classifiers for mobile exfiltration
M. A. Ramírez, Sangyoung Yoon, Ernesto Damiani, H. A. Hamadi, C. Ardagna, Nicola Bena, Young-Ji Byon, Tae-Yeon Kim, C. Cho, C. Yeun
20 Oct 2022 · AAML
Privacy Attacks Against Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models
Sohaib Ahmad, Benjamin Fuller, Kaleel Mahmood
22 Sep 2022 · AAML

PINCH: An Adversarial Extraction Attack Framework for Deep Learning Models
William Hackett, Stefan Trawicki, Zhengxin Yu, N. Suri, Peter Garraghan
13 Sep 2022 · MIACV, AAML

Privacy of Autonomous Vehicles: Risks, Protection Methods, and Future Directions
Chulin Xie, Zhong Cao, Yunhui Long, Diange Yang, Ding Zhao, Bo-wen Li
08 Sep 2022

Demystifying Arch-hints for Model Extraction: An Attack in Unified Memory System
Zhendong Wang, Xiaoming Zeng, Xulong Tang, Danfeng Zhang, Xingbo Hu, Yang Hu
29 Aug 2022 · AAML, MIACV, FedML

Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning
Xinlei He, Hongbin Liu, Neil Zhenqiang Gong, Yang Zhang
25 Jul 2022 · AAML, MIACV
I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences
Daryna Oliynyk, Rudolf Mayer, Andreas Rauber
16 Jun 2022

Reconstructing Training Data from Trained Neural Networks
Niv Haim, Gal Vardi, Gilad Yehudai, Ohad Shamir, Michal Irani
15 Jun 2022

NeuGuard: Lightweight Neuron-Guided Defense against Membership Inference Attacks
Nuo Xu, Binghui Wang, Ran Ran, Wujie Wen, Parv Venkitasubramaniam
11 Jun 2022 · AAML

One Picture is Worth a Thousand Words: A New Wallet Recovery Process
H. Chabanne, Vincent Despiegel, Linda Guiga
05 May 2022

Cracking White-box DNN Watermarks via Invariant Neuron Transforms
Yifan Yan, Xudong Pan, Yining Wang, Mi Zhang, Min Yang
30 Apr 2022 · AAML

Enhancing Privacy against Inversion Attacks in Federated Learning by using Mixing Gradients Strategies
Shaltiel Eloul, Fran Silavong, Sanket Kamthe, Antonios Georgiadis, Sean J. Moran
26 Apr 2022 · FedML
Poisoning Attacks and Defenses on Artificial Intelligence: A Survey
M. A. Ramírez, Song-Kyoo Kim, H. A. Hamadi, Ernesto Damiani, Young-Ji Byon, Tae-Yeon Kim, C. Cho, C. Yeun
21 Feb 2022 · AAML

Trusted AI in Multi-agent Systems: An Overview of Privacy and Security for Distributed Learning
Chuan Ma, Jun Li, Kang Wei, Bo Liu, Ming Ding, Long Yuan, Zhu Han, H. Vincent Poor
18 Feb 2022

Can't Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders
Zeyang Sha, Xinlei He, Ning Yu, Michael Backes, Yang Zhang
19 Jan 2022

Evaluating the Security of Open Radio Access Networks
D. Mimran, Ron Bitton, Y. Kfir, Eitan Klevansky, Oleg Brodt, Heiko Lehmann, Yuval Elovici, A. Shabtai
16 Jan 2022

StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning
Yupei Liu, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong
15 Jan 2022 · MIACV
Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges
Huaming Chen, Muhammad Ali Babar
12 Jan 2022 · AAML

SoK: A Study of the Security on Voice Processing Systems
Robert Chang, Logan Kuo, Arthur Liu, Nader Sehatbakhsh
24 Dec 2021

Model Stealing Attacks Against Inductive Graph Neural Networks
Yun Shen, Xinlei He, Yufei Han, Yang Zhang
15 Dec 2021

Membership Inference Attacks From First Principles
Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, Florian Tramèr
07 Dec 2021 · MIACV, MIALM

Mitigating Adversarial Attacks by Distributing Different Copies to Different Users
Jiyi Zhang, Hansheng Fang, W. Tann, Ke Xu, Chengfang Fang, E. Chang
30 Nov 2021 · AAML

Property Inference Attacks Against GANs
Junhao Zhou, Yufei Chen, Chao Shen, Yang Zhang
15 Nov 2021 · AAML, MIACV
Get a Model! Model Hijacking Attack Against Machine Learning Models
A. Salem, Michael Backes, Yang Zhang
08 Nov 2021 · AAML

Confidential Machine Learning Computation in Untrusted Environments: A Systems Security Perspective
Kha Dinh Duy, Taehyun Noh, Siwon Huh, Hojoon Lee
05 Nov 2021

Optimizing Secure Decision Tree Inference Outsourcing
Yifeng Zheng, Cong Wang, Ruochen Wang, Huayi Duan, Surya Nepal
31 Oct 2021

10 Security and Privacy Problems in Large Foundation Models
Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong
28 Oct 2021

Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture
Xinyu Tang, Saeed Mahloujifar, Liwei Song, Virat Shejwalkar, Milad Nasr, Amir Houmansadr, Prateek Mittal
15 Oct 2021

Bandwidth Utilization Side-Channel on ML Inference Accelerators
Sarbartha Banerjee, Shijia Wei, Prakash Ramrakhyani, Mohit Tiwari
14 Oct 2021
First to Possess His Statistics: Data-Free Model Extraction Attack on Tabular Data
Masataka Tasumi, Kazuki Iwahana, Naoto Yanai, Katsunari Shishido, Toshiya Shimizu, Yuji Higuchi, I. Morikawa, Jun Yajima
30 Sep 2021 · AAML

Can one hear the shape of a neural network?: Snooping the GPU via Magnetic Side Channel
H. Maia, Chang Xiao, Dingzeyu Li, E. Grinspun, Changxi Zheng
15 Sep 2021 · AAML

Formalizing and Estimating Distribution Inference Risks
Anshuman Suri, David E. Evans
13 Sep 2021 · MIACV

EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning
Hongbin Liu, Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong
25 Aug 2021

"Adversarial Examples" for Proof-of-Learning
Rui Zhang, Jian-wei Liu, Yuan Ding, Zhibo Wu, Qing Wu, K. Ren
21 Aug 2021 · AAML

Generative Models for Security: Attacks, Defenses, and Opportunities
L. A. Bauer, Vincent Bindschaedler
21 Jul 2021
NeurObfuscator: A Full-stack Obfuscation Tool to Mitigate Neural Architecture Stealing
Jingtao Li, Zhezhi He, Adnan Siraj Rakin, Deliang Fan, C. Chakrabarti
20 Jul 2021

Survey: Leakage and Privacy at Inference Time
Marija Jegorova, Chaitanya Kaul, Charlie Mayor, Alison Q. O'Neil, Alexander Weir, Roderick Murray-Smith, Sotirios A. Tsaftaris
04 Jul 2021 · PILM, MIACV

The Threat of Offensive AI to Organizations
Yisroel Mirsky, Ambra Demontis, J. Kotak, Ram Shankar, Deng Gelei, Liu Yang, X. Zhang, Wenke Lee, Yuval Elovici, Battista Biggio
30 Jun 2021

HODA: Hardness-Oriented Detection of Model Extraction Attacks
A. M. Sadeghzadeh, Amir Mohammad Sobhanian, F. Dehghan, R. Jalili
21 Jun 2021 · MIACV

Killing One Bird with Two Stones: Model Extraction and Attribute Inference Attacks against BERT-based APIs
Chen Chen, Xuanli He, Lingjuan Lyu, Fangzhao Wu
23 May 2021 · SILM, MIACV
Privacy Inference Attacks and Defenses in Cloud-based Deep Neural Network: A Survey
Xiaoyu Zhang, Chao Chen, Yi Xie, Xiaofeng Chen, Jun Zhang, Yang Xiang
13 May 2021 · FedML

GALA: Greedy ComputAtion for Linear Algebra in Privacy-Preserved Neural Networks
Qiao Zhang, Chunsheng Xin, Hongyi Wu
05 May 2021

Property Inference Attacks on Convolutional Neural Networks: Influence and Implications of Target Model's Complexity
Mathias Parisot, Balázs Pejó, Dayana Spagnuelo
27 Apr 2021 · MIACV

Turning Federated Learning Systems Into Covert Channels
Gabriele Costa, Fabio Pinelli, S. Soderi, Gabriele Tolomei
21 Apr 2021 · FedML

Membership Inference Attacks on Knowledge Graphs
Yu Wang, Lifu Huang, Philip S. Yu, Lichao Sun
16 Apr 2021 · MIACV