High Accuracy and High Fidelity Extraction of Neural Networks

3 September 2019
Matthew Jagielski
Nicholas Carlini
David Berthelot
Alexey Kurakin
Nicolas Papernot
    MLAU
    MIACV

Papers citing "High Accuracy and High Fidelity Extraction of Neural Networks"

33 / 83 papers shown
Defending against Model Stealing via Verifying Embedded External Features
Yiming Li
Linghui Zhu
Xiaojun Jia
Yong Jiang
Shutao Xia
Xiaochun Cao
AAML
37
61
0
07 Dec 2021
Property Inference Attacks Against GANs
Junhao Zhou
Yufei Chen
Chao Shen
Yang Zhang
AAML
MIACV
28
52
0
15 Nov 2021
Efficiently Learning Any One Hidden Layer ReLU Network From Queries
Sitan Chen
Adam R. Klivans
Raghu Meka
MLAU
MLT
42
8
0
08 Nov 2021
DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories
Adnan Siraj Rakin
Md Hafizul Islam Chowdhuryy
Fan Yao
Deliang Fan
AAML
MIACV
42
110
0
08 Nov 2021
Confidential Machine Learning Computation in Untrusted Environments: A Systems Security Perspective
Kha Dinh Duy
Taehyun Noh
Siwon Huh
Hojoon Lee
56
9
0
05 Nov 2021
SoK: Machine Learning Governance
Varun Chandrasekaran
Hengrui Jia
Anvith Thudi
Adelin Travers
Mohammad Yaghini
Nicolas Papernot
32
16
0
20 Sep 2021
Membership Inference Attacks Against Recommender Systems
Minxing Zhang
Z. Ren
Zihan Wang
Pengjie Ren
Zhumin Chen
Pengfei Hu
Yang Zhang
MIACV
AAML
18
83
0
16 Sep 2021
Guarding Machine Learning Hardware Against Physical Side-Channel Attacks
Anuj Dubey
Rosario Cammarota
Vikram B. Suresh
Aydin Aysu
AAML
30
30
0
01 Sep 2021
SoK: How Robust is Image Classification Deep Neural Network Watermarking? (Extended Version)
Nils Lukas
Edward Jiang
Xinda Li
Florian Kerschbaum
AAML
36
86
0
11 Aug 2021
DeepFreeze: Cold Boot Attacks and High Fidelity Model Recovery on Commercial EdgeML Device
Yoo-Seung Won
Soham Chatterjee
Dirmanto Jap
A. Basu
S. Bhasin
AAML
FedML
17
12
0
03 Aug 2021
MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI
T. Miura
Satoshi Hasegawa
Toshiki Shibahara
SILM
MIACV
16
37
0
19 Jul 2021
Survey: Leakage and Privacy at Inference Time
Marija Jegorova
Chaitanya Kaul
Charlie Mayor
Alison Q. O'Neil
Alexander Weir
Roderick Murray-Smith
Sotirios A. Tsaftaris
PILM
MIACV
17
71
0
04 Jul 2021
HODA: Hardness-Oriented Detection of Model Extraction Attacks
A. M. Sadeghzadeh
Amir Mohammad Sobhanian
F. Dehghan
R. Jalili
MIACV
17
7
0
21 Jun 2021
A Review of Confidentiality Threats Against Embedded Neural Network Models
Raphael Joud
Pierre-Alain Moëllic
Rémi Bernhard
J. Rigaud
28
6
0
04 May 2021
Black-Box Dissector: Towards Erasing-based Hard-Label Model Stealing Attack
Yixu Wang
Jie Li
Hong Liu
Yan Wang
Yongjian Wu
Feiyue Huang
Rongrong Ji
AAML
22
34
0
03 May 2021
Proof-of-Learning: Definitions and Practice
Hengrui Jia
Mohammad Yaghini
Christopher A. Choquette-Choo
Natalie Dullerud
Anvith Thudi
Varun Chandrasekaran
Nicolas Papernot
AAML
20
98
0
09 Mar 2021
Membership Inference Attacks are Easier on Difficult Problems
Avital Shafran
Shmuel Peleg
Yedid Hoshen
MIACV
11
16
0
15 Feb 2021
Quantifying and Mitigating Privacy Risks of Contrastive Learning
Xinlei He
Yang Zhang
11
51
0
08 Feb 2021
Data-Free Model Extraction
Jean-Baptiste Truong
Pratyush Maini
R. Walls
Nicolas Papernot
MIACV
15
181
0
30 Nov 2020
Amnesiac Machine Learning
Laura Graves
Vineel Nagisetty
Vijay Ganesh
MU
MIACV
16
242
0
21 Oct 2020
Model extraction from counterfactual explanations
Ulrich Aivodji
Alexandre Bolot
Sébastien Gambs
MIACV
MLAU
27
51
0
03 Sep 2020
A Survey of Privacy Attacks in Machine Learning
M. Rigaki
Sebastian Garcia
PILM
AAML
27
213
0
15 Jul 2020
SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems
H. Abdullah
Kevin Warren
Vincent Bindschaedler
Nicolas Papernot
Patrick Traynor
AAML
24
128
0
13 Jul 2020
Improving LIME Robustness with Smarter Locality Sampling
Sean Saito
Eugene Chua
Nicholas Capel
Rocco Hu
FAtt
AAML
7
22
0
22 Jun 2020
Stealing Deep Reinforcement Learning Models for Fun and Profit
Kangjie Chen
Shangwei Guo
Tianwei Zhang
Xiaofei Xie
Yang Liu
MLAU
MIACV
OffRL
14
45
0
09 Jun 2020
Perturbing Inputs to Prevent Model Stealing
J. Grana
AAML
SILM
11
5
0
12 May 2020
SNIFF: Reverse Engineering of Neural Networks with Fault Attacks
J. Breier
Dirmanto Jap
Xiaolu Hou
S. Bhasin
Yang Liu
15
52
0
23 Feb 2020
Mind Your Weight(s): A Large-scale Study on Insufficient Machine Learning Model Protection in Mobile Apps
Zhichuang Sun
Ruimin Sun
Long Lu
Alan Mislove
31
78
0
18 Feb 2020
Deep Neural Network Fingerprinting by Conferrable Adversarial Examples
Nils Lukas
Yuxuan Zhang
Florian Kerschbaum
MLAU
FedML
AAML
31
144
0
02 Dec 2019
The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey
Olakunle Ibitoye
Rana Abou-Khamis
Mohamed el Shehaby
Ashraf Matrawy
M. O. Shafiq
AAML
26
68
0
06 Nov 2019
Extraction of Complex DNN Models: Real Threat or Boogeyman?
B. Atli
S. Szyller
Mika Juuti
Samuel Marchal
Nadarajah Asokan
MLAU
MIACV
25
45
0
11 Oct 2019
Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
Guy Katz
Clark W. Barrett
D. Dill
Kyle D. Julian
Mykel Kochenderfer
AAML
228
1,835
0
03 Feb 2017
Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
Balaji Lakshminarayanan
Alexander Pritzel
Charles Blundell
UQCV
BDL
270
5,660
0
05 Dec 2016