ResearchTrend.AI

Hermes Attack: Steal DNN Models with Lossless Inference Accuracy
arXiv: 2006.12784
USENIX Security Symposium (USENIX Security), 2020
23 June 2020
Yuankun Zhu, Yueqiang Cheng, Husheng Zhou, Yantao Lu

Papers citing "Hermes Attack: Steal DNN Models with Lossless Inference Accuracy"

45 papers
Stealing AI Model Weights Through Covert Communication Channels
Valentin Barbaza, Alan Rodrigo Diaz-Rizo, Hassan Aboushady, Spyridon Raptis, Haralampos-G. Stratigopoulos
30 Sep 2025

Intellectual Property in Graph-Based Machine Learning as a Service: Attacks and Defenses
Lincan Li, Bolin Shen, Chenxi Zhao, Yuxiang Sun, Kaixiang Zhao, Shirui Pan, Yushun Dong
27 Aug 2025

Surveying the Operational Cybersecurity and Supply Chain Threat Landscape when Developing and Deploying AI Systems
Michael R. Smith, J. Ingram
27 Aug 2025

Hot-Swap MarkBoard: An Efficient Black-box Watermarking Approach for Large-scale Model Distribution
Zhicheng Zhang, Peizhuo Lv, Mengke Wan, Jiang Fang, Diandian Guo, Yezeng Chen, Yinlong Liu, Wei Ma, Jiyan Sun, Liru Geng
28 Jul 2025

DREAM: Domain-agnostic Reverse Engineering Attributes of Black-box Model
IEEE Transactions on Knowledge and Data Engineering (TKDE), 2024
Rongqing Li, Jiaqi Yu, Changsheng Li, Tong Lu, Ye Yuan, Guoren Wang
08 Dec 2024
SoK: A Systems Perspective on Compound AI Threats and Countermeasures
Sarbartha Banerjee, Prateek Sahu, Mulong Luo, Anjo Vahldiek-Oberwagner, N. Yadwadkar, Mohit Tiwari
20 Nov 2024

TEESlice: Protecting Sensitive Neural Network Models in Trusted Execution Environments When Attackers have Pre-Trained Models
ACM Transactions on Software Engineering and Methodology (TOSEM), 2024
Ding Li, Ziqi Zhang, Mengyu Yao, Y. Cai, Yao Guo, Xiangqun Chen
15 Nov 2024

PEAS: A Strategy for Crafting Transferable Adversarial Examples
Bar Avraham, Yisroel Mirsky
20 Oct 2024

Non-transferable Pruning
European Conference on Computer Vision (ECCV), 2024
Ruyi Ding, Lili Su, A. A. Ding, Yunsi Fei
10 Oct 2024

The Early Bird Catches the Leak: Unveiling Timing Side Channels in LLM Serving Systems
IEEE Transactions on Information Forensics and Security (IEEE TIFS), 2024
Linke Song, Zixuan Pang, Wenhao Wang, Zihao Wang, XiaoFeng Wang, H. G. Chen, Wei Song, Yier Jin, Dan Meng, Rui Hou
30 Sep 2024
Confidential Computing on Heterogeneous CPU-GPU Systems: Survey and Future Directions
Qifan Wang, David Oswald
21 Aug 2024

Beyond Slow Signs in High-fidelity Model Extraction
Neural Information Processing Systems (NeurIPS), 2024
Hanna Foerster, Robert D. Mullins, Ilia Shumailov, Jamie Hayes
14 Jun 2024

Amalgam: A Framework for Obfuscated Neural Network Training on the Cloud
Sifat Ut Taki, Spyridon Mastorakis
02 Jun 2024

DLoRA: Distributed Parameter-Efficient Fine-Tuning Solution for Large Language Model
Chao Gao, Sai Qian Zhang
08 Apr 2024

Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness
David Fernández Llorca, Ronan Hamon, Henrik Junklewitz, Kathrin Grosse, Lars Kunze, ..., Nick Reed, Alexandre Alahi, Emilia Gómez, Ignacio E. Sánchez, Á. Kriston
21 Feb 2024
Unraveling Attacks in Machine Learning-based IoT Ecosystems: A Survey and the Open Libraries Behind Them
Chao-Jung Liu, Boxi Chen, Wei Shao, Chris Zhang, Kelvin Wong, Yi Zhang
22 Jan 2024

No Privacy Left Outside: On the (In-)Security of TEE-Shielded DNN Partition for On-Device ML
Ziqi Zhang, Chen Gong, Yifeng Cai, Yuanyuan Yuan, Bingyan Liu, Ding Li, Yao Guo, Xiangqun Chen
11 Oct 2023

Beyond Labeling Oracles: What does it mean to steal ML models?
Avital Shafran, Ilia Shumailov, Murat A. Erdogdu, Nicolas Papernot
03 Oct 2023

DeepTheft: Stealing DNN Model Architectures through Power Side Channel
IEEE Symposium on Security and Privacy (IEEE S&P), 2023
Yansong Gao, Huming Qiu, Zhi-Li Zhang, Binghui Wang, Hua Ma, A. Abuadbba, Minhui Xue, Anmin Fu, Surya Nepal
21 Sep 2023
Compilation as a Defense: Enhancing DL Model Attack Robustness via Tensor Optimization
Stefan Trawicki, William Hackett, Lewis Birch, M. Dascalu, Peter Garraghan
20 Sep 2023

DREAM: Domain-free Reverse Engineering Attributes of Black-box Model
Rongqing Li, Jiaqi Yu, Changsheng Li, Tong Lu, Ye Yuan, Guoren Wang
20 Jul 2023

EZClone: Improving DNN Model Extraction Attack via Shape Distillation from GPU Execution Profiles
Jonah O'Brien Weiss, Tiago A. O. Alves, S. Kundu
06 Apr 2023

Rethinking White-Box Watermarks on Deep Learning Models under Neural Structural Obfuscation
USENIX Security Symposium (USENIX Security), 2023
Yifan Yan, Xudong Pan, Mi Zhang, Min Yang
17 Mar 2023

Digital Privacy Under Attack: Challenges and Enablers
ACM Computing Surveys (ACM Comput. Surv.), 2023
Baobao Song, Mengyue Deng, Qiujun Lan, R. Doss, Gang Li
18 Feb 2023
"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice
Giovanni Apruzzese, Hyrum S. Anderson, Savino Dambra, D. Freeman, Fabio Pierazzi, Kevin A. Roundy
29 Dec 2022

Decompiling x86 Deep Neural Network Executables
USENIX Security Symposium (USENIX Security), 2022
Zhibo Liu, Yuanyuan Yuan, Shuai Wang, Xiaofei Xie, Lei Ma
03 Oct 2022

PINCH: An Adversarial Extraction Attack Framework for Deep Learning Models
William Hackett, Stefan Trawicki, Zhengxin Yu, N. Suri, Peter Garraghan
13 Sep 2022

Demystifying Arch-hints for Model Extraction: An Attack in Unified Memory System
Zhendong Wang, Xiaoming Zeng, Xulong Tang, Qiang Yan, Xingbo Hu, Yang Hu
29 Aug 2022

TrojViT: Trojan Insertion in Vision Transformers
Computer Vision and Pattern Recognition (CVPR), 2022
Mengxin Zheng, Qian Lou, Lei Jiang
27 Aug 2022
ObfuNAS: A Neural Architecture Search-based DNN Obfuscation Approach
Tong Zhou, Shaolei Ren, Xiaolin Xu
17 Aug 2022

High-Level Approaches to Hardware Security: A Tutorial
ACM Transactions on Embedded Computing Systems (TECS), 2022
Hammond Pearce, Ramesh Karri, Benjamin Tan
21 Jul 2022

Revealing Secrets From Pre-trained Models
Mujahid Al Rafi, Yuan Feng, Hyeran Jeon
19 Jul 2022

I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences
ACM Computing Surveys (ACM CSUR), 2022
Daryna Oliynyk, Rudolf Mayer, Andreas Rauber
16 Jun 2022

Learning to Reverse DNNs from AI Programs Automatically
Simin Chen, Hamed Khanpour, Cong Liu, Wei Yang
20 May 2022

Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations
Computer Vision and Pattern Recognition (CVPR), 2022
Zirui Peng, Shaofeng Li, Guoxing Chen, Cheng Zhang, Haojin Zhu, Minhui Xue
17 Feb 2022
StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning
Conference on Computer and Communications Security (CCS), 2022
Yupei Liu, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong
15 Jan 2022

Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges
ACM Computing Surveys (ACM CSUR), 2022
Huaming Chen, Muhammad Ali Babar
12 Jan 2022

BDFA: A Blind Data Adversarial Bit-flip Attack on Deep Neural Networks
B. Ghavami, Mani Sadati, M. Shahidzadeh, Zhenman Fang, Lesley Shannon
07 Dec 2021

DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories
Adnan Siraj Rakin, Md Hafizul Islam Chowdhuryy, Fan Yao, Deliang Fan
08 Nov 2021

Confidential Machine Learning Computation in Untrusted Environments: A Systems Security Perspective
IEEE Access (IEEE Access), 2021
Kha Dinh Duy, Taehyun Noh, Siwon Huh, Hojoon Lee
05 Nov 2021

Fingerprinting Multi-exit Deep Neural Network Models via Inference Time
Tian Dong, Han Qiu, Tianwei Zhang, Jiwei Li, Hewu Li, Jialiang Lu
07 Oct 2021
First to Possess His Statistics: Data-Free Model Extraction Attack on Tabular Data
Masataka Tasumi, Kazuki Iwahana, Naoto Yanai, Katsunari Shishido, Toshiya Shimizu, Yuji Higuchi, I. Morikawa, Jun Yajima
30 Sep 2021

SoK: Machine Learning Governance
Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot
20 Sep 2021

HODA: Hardness-Oriented Detection of Model Extraction Attacks
IEEE Transactions on Information Forensics and Security (IEEE TIFS), 2021
A. M. Sadeghzadeh, Amir Mohammad Sobhanian, F. Dehghan, R. Jalili
21 Jun 2021

Delving into Data: Effectively Substitute Training for Black-box Attack
Computer Vision and Pattern Recognition (CVPR), 2021
Wenxuan Wang, Bangjie Yin, Taiping Yao, Li Zhang, Yanwei Fu, Shouhong Ding, Jilin Li, Feiyue Huang, Xiangyang Xue
26 Apr 2021