Hermes Attack: Steal DNN Models with Lossless Inference Accuracy

23 June 2020
Yuankun Zhu, Yueqiang Cheng, Husheng Zhou, Yantao Lu
Tags: MIACV, AAML
arXiv: 2006.12784

Papers citing "Hermes Attack: Steal DNN Models with Lossless Inference Accuracy"

17 papers shown

The Early Bird Catches the Leak: Unveiling Timing Side Channels in LLM Serving Systems
Linke Song, Zixuan Pang, Wenhao Wang, Zihao Wang, XiaoFeng Wang, Hongbo Chen, Wei Song, Yier Jin, Dan Meng, Rui Hou
30 Sep 2024

Digital Privacy Under Attack: Challenges and Enablers
Baobao Song, Mengyue Deng, Shiva Raj Pokhrel, Qiujun Lan, R. Doss, Gang Li
Tags: AAML
18 Feb 2023

"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice
Giovanni Apruzzese, Hyrum S. Anderson, Savino Dambra, D. Freeman, Fabio Pierazzi, Kevin A. Roundy
Tags: AAML
29 Dec 2022

Decompiling x86 Deep Neural Network Executables
Zhibo Liu, Yuanyuan Yuan, Shuai Wang, Xiaofei Xie, L. Ma
Tags: AAML
03 Oct 2022

ObfuNAS: A Neural Architecture Search-based DNN Obfuscation Approach
Tong Zhou, Shaolei Ren, Xiaolin Xu
Tags: AAML
17 Aug 2022

I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences
Daryna Oliynyk, Rudolf Mayer, Andreas Rauber
16 Jun 2022

Learning to Reverse DNNs from AI Programs Automatically
Simin Chen, Hamed Khanpour, Cong Liu, Wei Yang
20 May 2022

Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations
Zirui Peng, Shaofeng Li, Guoxing Chen, Cheng Zhang, Haojin Zhu, Minhui Xue
Tags: AAML, FedML
17 Feb 2022

Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges
Huaming Chen, Muhammad Ali Babar
Tags: AAML
12 Jan 2022

DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories
Adnan Siraj Rakin, Md Hafizul Islam Chowdhuryy, Fan Yao, Deliang Fan
Tags: AAML, MIACV
08 Nov 2021

Confidential Machine Learning Computation in Untrusted Environments: A Systems Security Perspective
Kha Dinh Duy, Taehyun Noh, Siwon Huh, Hojoon Lee
05 Nov 2021

Fingerprinting Multi-exit Deep Neural Network Models via Inference Time
Tian Dong, Han Qiu, Tianwei Zhang, Jiwei Li, Hewu Li, Jialiang Lu
Tags: AAML
07 Oct 2021

First to Possess His Statistics: Data-Free Model Extraction Attack on Tabular Data
Masataka Tasumi, Kazuki Iwahana, Naoto Yanai, Katsunari Shishido, Toshiya Shimizu, Yuji Higuchi, I. Morikawa, Jun Yajima
Tags: AAML
30 Sep 2021

SoK: Machine Learning Governance
Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot
20 Sep 2021

HODA: Hardness-Oriented Detection of Model Extraction Attacks
A. M. Sadeghzadeh, Amir Mohammad Sobhanian, F. Dehghan, R. Jalili
Tags: MIACV
21 Jun 2021

Delving into Data: Effectively Substitute Training for Black-box Attack
Wenxuan Wang, Bangjie Yin, Taiping Yao, Li Zhang, Yanwei Fu, Shouhong Ding, Jilin Li, Feiyue Huang, Xiangyang Xue
Tags: AAML
26 Apr 2021

Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
Tags: SILM, AAML
08 Jul 2016