Stealing Hyperparameters in Machine Learning
Binghui Wang, Neil Zhenqiang Gong
14 February 2018 · AAML

Papers citing "Stealing Hyperparameters in Machine Learning" (50 of 206 shown)
  • Practical Defences Against Model Inversion Attacks for Split Neural Networks · Tom Titcombe, A. Hall, Pavlos Papadopoulos, Daniele Romanini · FedML · 12 Apr 2021
  • Machine Learning Based Cyber Attacks Targeting on Controlled Information: A Survey · Yuantian Miao, Chao Chen, Lei Pan, Qing-Long Han, Jun Zhang, Yang Xiang · AAML · 16 Feb 2021
  • Membership Inference Attacks are Easier on Difficult Problems · Avital Shafran, Shmuel Peleg, Yedid Hoshen · MIACV · 15 Feb 2021
  • Dompteur: Taming Audio Adversarial Examples · Thorsten Eisenhofer, Lea Schonherr, Joel Frank, Lars Speckemeier, D. Kolossa, Thorsten Holz · AAML · 10 Feb 2021
  • Node-Level Membership Inference Attacks Against Graph Neural Networks · Xinlei He, Rui Wen, Yixin Wu, Michael Backes, Yun Shen, Yang Zhang · 10 Feb 2021
  • Quantifying and Mitigating Privacy Risks of Contrastive Learning · Xinlei He, Yang Zhang · 08 Feb 2021
  • ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models · Yugeng Liu, Rui Wen, Xinlei He, A. Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, Yang Zhang · AAML · 04 Feb 2021
  • Property Inference From Poisoning · Melissa Chase, Esha Ghosh, Saeed Mahloujifar · MIACV · 26 Jan 2021
  • Membership Inference Attack on Graph Neural Networks · Iyiola E. Olatunji, Wolfgang Nejdl, Megha Khosla · AAML · 17 Jan 2021
  • Towards a Robust and Trustworthy Machine Learning System Development: An Engineering Perspective · Pulei Xiong, Scott Buffett, Shahrear Iqbal, Philippe Lamontagne, M. Mamun, Heather Molyneaux · OOD · 08 Jan 2021
  • Practical Blind Membership Inference Attack via Differential Comparisons · Bo Hui, Yuchen Yang, Haolin Yuan, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao · MIACV · 05 Jan 2021
  • Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead · Muhammad Shafique, Mahum Naseer, T. Theocharides, C. Kyrkou, O. Mutlu, Lois Orosa, Jungwook Choi · OOD · 04 Jan 2021
  • Robustness Threats of Differential Privacy · Nurislam Tursynbek, Aleksandr Petiushko, Ivan V. Oseledets · AAML · 14 Dec 2020
  • When Machine Learning Meets Privacy: A Survey and Outlook · B. Liu, Ming Ding, Sina Shaham, W. Rahayu, F. Farokhi, Zihuai Lin · 24 Nov 2020
  • Monitoring-based Differential Privacy Mechanism Against Query-Flooding Parameter Duplication Attack · Haonan Yan, Xiaoguang Li, Hui Li, Jiamin Li, Wenhai Sun, Fenghua Li · AAML · 01 Nov 2020
  • Evaluation of Inference Attack Models for Deep Learning on Medical Data · Maoqiang Wu, Xinyue Zhang, Jiahao Ding, H. Nguyen, Rong Yu, M. Pan, Stephen T. C. Wong · MIACV · 31 Oct 2020
  • Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes · Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong · AAML · 26 Oct 2020
  • Exploring the Security Boundary of Data Reconstruction via Neuron Exclusivity Analysis · Xudong Pan, Mi Zhang, Yifan Yan, Jiaming Zhu, Zhemin Yang · AAML · 26 Oct 2020
  • CryptoGRU: Low Latency Privacy-Preserving Text Analysis With GRU · Bo Feng, Qian Lou, Lei Jiang, Geoffrey C. Fox · 22 Oct 2020
  • Black-Box Ripper: Copying black-box models using generative evolutionary algorithms · Antonio Bărbălău, Adrian Cosma, Radu Tudor Ionescu, Marius Popescu · MIACV, MLAU · 21 Oct 2020
  • A survey of algorithmic recourse: definitions, formulations, solutions, and prospects · Amir-Hossein Karimi, Gilles Barthe, Bernhard Schölkopf, Isabel Valera · FaML · 08 Oct 2020
  • Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks · A. Salem, Michael Backes, Yang Zhang · 07 Oct 2020
  • Quantifying Privacy Leakage in Graph Embedding · Vasisht Duddu, A. Boutet, Virat Shejwalkar · MIACV · 02 Oct 2020
  • ES Attack: Model Stealing against Deep Neural Networks without Data Hurdles · Xiaoyong Yuan, Lei Ding, Lan Zhang, Xiaolin Li, D. Wu · 21 Sep 2020
  • Local and Central Differential Privacy for Robustness and Privacy in Federated Learning · Mohammad Naseri, Jamie Hayes, Emiliano De Cristofaro · FedML · 08 Sep 2020
  • Model extraction from counterfactual explanations · Ulrich Aivodji, Alexandre Bolot, Sébastien Gambs · MIACV, MLAU · 03 Sep 2020
  • Simulating Unknown Target Models for Query-Efficient Black-box Attacks · Chen Ma, L. Chen, Junhai Yong · MLAU, OOD · 02 Sep 2020
  • DeepPeep: Exploiting Design Ramifications to Decipher the Architecture of Compact DNNs · N. Jha, Sparsh Mittal, Binod Kumar, Govardhan Mattela · AAML · 30 Jul 2020
  • Label-Only Membership Inference Attacks · Christopher A. Choquette-Choo, Florian Tramèr, Nicholas Carlini, Nicolas Papernot · MIACV, MIALM · 28 Jul 2020
  • A Survey of Privacy Attacks in Machine Learning · M. Rigaki, Sebastian Garcia · PILM, AAML · 15 Jul 2020
  • SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems · H. Abdullah, Kevin Warren, Vincent Bindschaedler, Nicolas Papernot, Patrick Traynor · AAML · 13 Jul 2020
  • Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain · Ishai Rosenberg, A. Shabtai, Yuval Elovici, L. Rokach · AAML · 05 Jul 2020
  • Generating Adversarial Examples with Controllable Non-transferability · Renzhi Wang, Tianwei Zhang, Xiaofei Xie, L. Ma, Cong Tian, Felix Juefei Xu, Yang Liu · SILM, AAML · 02 Jul 2020
  • Hermes Attack: Steal DNN Models with Lossless Inference Accuracy · Yuankun Zhu, Yueqiang Cheng, Husheng Zhou, Yantao Lu · MIACV, AAML · 23 Jun 2020
  • SPEED: Secure, PrivatE, and Efficient Deep learning · Arnaud Grivet Sébert, Rafael Pinot, Martin Zuber, Cédric Gouy-Pailler, Renaud Sirdey · FedML · 16 Jun 2020
  • Stealing Deep Reinforcement Learning Models for Fun and Profit · Kangjie Chen, Shangwei Guo, Tianwei Zhang, Xiaofei Xie, Yang Liu · MLAU, MIACV, OffRL · 09 Jun 2020
  • Detecting and Understanding Real-World Differential Performance Bugs in Machine Learning Libraries · Saeid Tizpaz-Niari, Pavol Cerný, Ashutosh Trivedi · 03 Jun 2020
  • BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements · Xiaoyi Chen, A. Salem, Dingfan Chen, Michael Backes, Shiqing Ma, Qingni Shen, Zhonghai Wu, Yang Zhang · SILM · 01 Jun 2020
  • Revisiting Membership Inference Under Realistic Assumptions · Bargav Jayaraman, Lingxiao Wang, Katherine Knipmeyer, Quanquan Gu, David E. Evans · 21 May 2020
  • An Overview of Privacy in Machine Learning · Emiliano De Cristofaro · SILM · 18 May 2020
  • Perturbing Inputs to Prevent Model Stealing · J. Grana · AAML, SILM · 12 May 2020
  • Defending Model Inversion and Membership Inference Attacks via Prediction Purification · Ziqi Yang, Bin Shao, Bohan Xuan, E. Chang, Fan Zhang · AAML · 08 May 2020
  • MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation · Sanjay Kariyappa, A. Prakash, Moinuddin K. Qureshi · AAML · 06 May 2020
  • When Machine Unlearning Jeopardizes Privacy · Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang · MIACV · 05 May 2020
  • Enhancing network forensics with particle swarm and deep learning: The particle deep framework · Nickolaos Koroniotis, Nour Moustafa · 02 May 2020
  • Privacy in Deep Learning: A Survey · Fatemehsadat Mirshghallah, Mohammadkazem Taram, Praneeth Vepakomma, Abhishek Singh, Ramesh Raskar, H. Esmaeilzadeh · FedML · 25 Apr 2020
  • ENSEI: Efficient Secure Inference via Frequency-Domain Homomorphic Convolution for Privacy-Preserving Visual Recognition · S. Bian, Tianchen Wang, Masayuki Hiromoto, Yiyu Shi, Takashi Sato · FedML · 11 Mar 2020
  • Cryptanalytic Extraction of Neural Network Models · Nicholas Carlini, Matthew Jagielski, Ilya Mironov · FedML, MLAU, MIACV, AAML · 10 Mar 2020
  • Dynamic Backdoor Attacks Against Machine Learning Models · A. Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang · AAML · 07 Mar 2020
  • Stealing Black-Box Functionality Using The Deep Neural Tree Architecture · Daniel Teitelman, I. Naeh, Shie Mannor · 23 Feb 2020