ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.
Stealing Hyperparameters in Machine Learning
Binghui Wang, Neil Zhenqiang Gong
AAML · 14 February 2018 · arXiv:1802.05351

Papers citing "Stealing Hyperparameters in Machine Learning" (50 / 206 papers shown)
SNIFF: Reverse Engineering of Neural Networks with Fault Attacks
J. Breier, Dirmanto Jap, Xiaolu Hou, S. Bhasin, Yang Liu
23 Feb 2020

Influence Function based Data Poisoning Attacks to Top-N Recommender Systems
Minghong Fang, Neil Zhenqiang Gong, Jia-Wei Liu
TDI · 19 Feb 2020

Mind Your Weight(s): A Large-scale Study on Insufficient Machine Learning Model Protection in Mobile Apps
Zhichuang Sun, Ruimin Sun, Long Lu, Alan Mislove
18 Feb 2020

Mitigating Query-Flooding Parameter Duplication Attack on Regression Models with High-Dimensional Gaussian Mechanism
Xiaoguang Li, Hui Li, Haonan Yan, Zelei Cheng, Wenhai Sun, Hui Zhu
AAML · 06 Feb 2020

Model Extraction Attacks against Recurrent Neural Networks
Tatsuya Takemura, Naoto Yanai, T. Fujiwara
MLAU, MIACV, AAML · 01 Feb 2020

Adversarial Model Extraction on Graph Neural Networks
David DeFazio, Arti Ramesh
AAML, MLAU · 16 Dec 2019
Towards Security Threats of Deep Learning Systems: A Survey
Yingzhe He, Guozhu Meng, Kai Chen, Xingbo Hu, Jinwen He
AAML, ELM · 28 Nov 2019

Survey of Attacks and Defenses on Edge-Deployed Neural Networks
Mihailo Isakov, V. Gadepally, K. Gettings, Michel A. Kinsy
AAML · 27 Nov 2019

Local Model Poisoning Attacks to Byzantine-Robust Federated Learning
Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
AAML, OOD, FedML · 26 Nov 2019

CHEETAH: An Ultra-Fast, Approximation-Free, and Privacy-Preserved Neural Network Framework based on Joint Obscure Linear and Nonlinear Computations
Qiao Zhang, Cong Wang, Chunsheng Xin, Hongyi Wu
12 Nov 2019

Quantifying (Hyper) Parameter Leakage in Machine Learning
Vasisht Duddu, D. V. Rao
AAML, MIACV, FedML · 31 Oct 2019

MaskedNet: The First Hardware Inference Engine Aiming Power Side-Channel Protection
Anuj Dubey, Rosario Cammarota, Aydin Aysu
AAML · 29 Oct 2019
IPGuard: Protecting Intellectual Property of Deep Neural Networks via Fingerprinting the Classification Boundary
Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
28 Oct 2019

Secure Evaluation of Quantized Neural Networks
Anders Dalskov, Daniel E. Escudero, Marcel Keller
28 Oct 2019

Piracy Resistant Watermarks for Deep Neural Networks
Huiying Li, Emily Willson, Shawn Shan, B. Ye, Shehroz S. Khan
02 Oct 2019

Robust Membership Encoding: Inference Attacks and Copyright Protection for Deep Learning
Congzheng Song, Reza Shokri
MIACV · 27 Sep 2019

GAMIN: An Adversarial Approach to Black-Box Model Inversion
Ulrich Aivodji, Sébastien Gambs, Timon Ther
MLAU · 26 Sep 2019

MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, Neil Zhenqiang Gong
23 Sep 2019

High Accuracy and High Fidelity Extraction of Neural Networks
Matthew Jagielski, Nicholas Carlini, David Berthelot, Alexey Kurakin, Nicolas Papernot
MLAU, MIACV · 03 Sep 2019
Big Data Analytics for Large Scale Wireless Networks: Challenges and Opportunities
Hongning Dai, Raymond Chi-Wing Wong, Hao Wang, Zibin Zheng, A. Vasilakos
AI4CE, GNN · 02 Sep 2019

Imperio: Robust Over-the-Air Adversarial Examples for Automatic Speech Recognition Systems
Lea Schonherr, Thorsten Eisenhofer, Steffen Zeiler, Thorsten Holz, D. Kolossa
AAML · 05 Aug 2019

Adversarial Security Attacks and Perturbations on Machine Learning and Deep Learning Methods
Arif Siddiqi
AAML · 17 Jul 2019

Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks
Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz
AAML · 26 Jun 2019

On the Robustness of the Backdoor-based Watermarking in Deep Neural Networks
Masoumeh Shafieinejad, Jiaqi Wang, Nils Lukas, Xinda Li, Florian Kerschbaum
AAML · 18 Jun 2019

Membership Privacy for Machine Learning Models Through Knowledge Transfer
Virat Shejwalkar, Amir Houmansadr
15 Jun 2019
Privacy Risks of Securing Machine Learning Models against Adversarial Examples
Liwei Song, Reza Shokri, Prateek Mittal
SILM, MIACV, AAML · 24 May 2019

A framework for the extraction of Deep Neural Networks by leveraging public data
Soham Pal, Yash Gupta, Aditya Shukla, Aditya Kanade, S. Shevade, V. Ganapathy
FedML, MLAU, MIACV · 22 May 2019

Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges
Rob Ashmore, R. Calinescu, Colin Paterson
AI4TS · 10 May 2019

Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning
A. Salem, Apratim Bhattacharyya, Michael Backes, Mario Fritz, Yang Zhang
FedML, AAML, MIACV · 01 Apr 2019

Neural Network Model Extraction Attacks in Edge Devices by Hearing Architectural Hints
Xing Hu, Ling Liang, Lei Deng, Shuangchen Li, Xinfeng Xie, Yu Ji, Yufei Ding, Chang Liu, T. Sherwood, Yuan Xie
AAML, MLAU · 10 Mar 2019
Attacking Graph-based Classification via Manipulating the Graph Structure
Binghui Wang, Neil Zhenqiang Gong
AAML · 01 Mar 2019

Evaluating Differentially Private Machine Learning in Practice
Bargav Jayaraman, David E. Evans
24 Feb 2019

Stealing Neural Networks via Timing Side Channels
Vasisht Duddu, D. Samanta, D. V. Rao, V. Balas
AAML, MLAU, FedML · 31 Dec 2018

Privacy Partitioning: Protecting User Data During the Deep Learning Inference Phase
Jianfeng Chi, Emmanuel Owusu, Xuwang Yin, Tong Yu, William Chan, P. Tague, Yuan Tian
FedML · 07 Dec 2018

Knockoff Nets: Stealing Functionality of Black-Box Models
Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz
MLAU · 06 Dec 2018

Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning
Milad Nasr, Reza Shokri, Amir Houmansadr
FedML, MIACV, AAML · 03 Dec 2018
Disentangling Adversarial Robustness and Generalization
David Stutz, Matthias Hein, Bernt Schiele
AAML, OOD · 03 Dec 2018

Fighting Fire with Fire: Using Antidote Data to Improve Polarization and Fairness of Recommender Systems
Bashir Rastegarpanah, Krishna P. Gummadi, M. Crovella
02 Dec 2018

Exploring Connections Between Active Learning and Model Extraction
Varun Chandrasekaran, Kamalika Chaudhuri, Irene Giacomelli, Shane Walker, Songbai Yan
MIACV · 05 Nov 2018

Auditing Data Provenance in Text-Generation Models
Congzheng Song, Vitaly Shmatikov
MLAU · 01 Nov 2018

Security Matters: A Survey on Adversarial Machine Learning
Guofu Li, Pengjia Zhu, Jin Li, Zhemin Yang, Ning Cao, Zhiyi Chen
AAML · 16 Oct 2018

Security Analysis of Deep Neural Networks Operating in the Presence of Cache Side-Channel Attacks
Sanghyun Hong, Michael Davinroy, Yigitcan Kaya, S. Locke, Ian Rackow, Kevin Kulda, Dana Dachman-Soled, Tudor Dumitras
MIACV · 08 Oct 2018
Data-Driven Debugging for Functional Side Channels
Saeid Tizpaz-Niari, Pavol Cerný, Ashutosh Trivedi
30 Aug 2018

Adversarial Attacks Against Automatic Speech Recognition Systems via Psychoacoustic Hiding
Lea Schonherr, Katharina Kohls, Steffen Zeiler, Thorsten Holz, D. Kolossa
AAML · 16 Aug 2018

Cache Telepathy: Leveraging Shared Resource Attacks to Learn DNN Architectures
Mengjia Yan, Christopher W. Fletcher, Josep Torrellas
MIACV, FedML · 14 Aug 2018

MLCapsule: Guarded Offline Deployment of Machine Learning as a Service
L. Hanzlik, Yang Zhang, Kathrin Grosse, A. Salem, Maximilian Augustin, Michael Backes, Mario Fritz
OffRL · 01 Aug 2018

Security and Privacy Issues in Deep Learning
Ho Bae, Jaehee Jang, Dahuin Jung, Hyemi Jang, Heonseok Ha, Hyungyu Lee, Sungroh Yoon
SILM, MIACV · 31 Jul 2018

Machine Learning with Membership Privacy using Adversarial Regularization
Milad Nasr, Reza Shokri, Amir Houmansadr
FedML, MIACV · 16 Jul 2018

Privacy-preserving Machine Learning through Data Obfuscation
Tianwei Zhang, Zecheng He, R. Lee
05 Jul 2018

ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models
A. Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, Michael Backes
MIACV, MIALM · 04 Jun 2018