ResearchTrend.AI
Cache Telepathy: Leveraging Shared Resource Attacks to Learn DNN Architectures

14 August 2018 · arXiv:1808.04761
Mengjia Yan, Christopher W. Fletcher, Josep Torrellas
Communities: MIACV, FedML

Papers citing "Cache Telepathy: Leveraging Shared Resource Attacks to Learn DNN Architectures"

50 of 100 citing papers shown
• MixNN: A design for protecting deep learning models
  Chao Liu, Hao Chen, Yusen Wu, Rui Jin · 28 Mar 2022

• CacheFX: A Framework for Evaluating Cache Security
  Daniel Genkin, William Kosasih, Fangfei Liu, Anna Trikalinou, Thomas Unterluggauer, Y. Yarom · 27 Jan 2022

• pvCNN: Privacy-Preserving and Verifiable Convolutional Neural Network Testing
  Jiasi Weng, Jian Weng, Gui Tang, Anjia Yang, Ming Li, Jia-Nan Liu · 23 Jan 2022

• MetaV: A Meta-Verifier Approach to Task-Agnostic Model Fingerprinting
  Xudong Pan, Yifan Yan, Mi Zhang, Min Yang · 19 Jan 2022

• StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning
  Yupei Liu, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong · 15 Jan 2022 · MIACV
• DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories
  Adnan Siraj Rakin, Md Hafizul Islam Chowdhuryy, Fan Yao, Deliang Fan · 08 Nov 2021 · AAML, MIACV

• Confidential Machine Learning Computation in Untrusted Environments: A Systems Security Perspective
  Kha Dinh Duy, Taehyun Noh, Siwon Huh, Hojoon Lee · 05 Nov 2021

• 10 Security and Privacy Problems in Large Foundation Models
  Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong · 28 Oct 2021

• Physical Side-Channel Attacks on Embedded Neural Networks: A Survey
  M. M. Real, Ruben Salvador · 21 Oct 2021 · AAML

• Rosita++: Automatic Higher-Order Leakage Elimination from Cryptographic Code
  Madura A Shelton, L. Chmielewski, Niels Samwel, Markus Wagner, L. Batina, Y. Yarom · 24 Sep 2021
• Can one hear the shape of a neural network?: Snooping the GPU via Magnetic Side Channel
  H. Maia, Chang Xiao, Dingzeyu Li, E. Grinspun, Changxi Zheng · 15 Sep 2021 · AAML

• Guarding Machine Learning Hardware Against Physical Side-Channel Attacks
  Anuj Dubey, Rosario Cammarota, Vikram B. Suresh, Aydin Aysu · 01 Sep 2021 · AAML

• Power-Based Attacks on Spatial DNN Accelerators
  Ge Li, Mohit Tiwari, Michael Orshansky · 28 Aug 2021

• MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI
  T. Miura, Satoshi Hasegawa, Toshiki Shibahara · 19 Jul 2021 · SILM, MIACV

• HODA: Hardness-Oriented Detection of Model Extraction Attacks
  A. M. Sadeghzadeh, Amir Mohammad Sobhanian, F. Dehghan, R. Jalili · 21 Jun 2021 · MIACV
• Good Artists Copy, Great Artists Steal: Model Extraction Attacks Against Image Translation Models
  S. Szyller, Vasisht Duddu, Tommi Gröndahl, Nirmal Asokan · 26 Apr 2021 · MIACV

• Prime+Probe 1, JavaScript 0: Overcoming Browser-based Side-Channel Defenses
  A. Shusterman, Ayush Agarwal, Sioli O'Connell, Daniel Genkin, Yossef Oren, Y. Yarom · 08 Mar 2021

• Ownership Verification of DNN Architectures via Hardware Cache Side Channels
  Xiaoxuan Lou, Shangwei Guo, Jiwei Li, Tianwei Zhang · 06 Feb 2021

• MIRAGE: Mitigating Conflict-Based Cache Attacks with a Practical Fully-Associative Design
  Gururaj Saileshwar, Moinuddin K. Qureshi · 18 Sep 2020

• Artificial Neural Networks and Fault Injection Attacks
  Shahin Tajik, F. Ganji · 17 Aug 2020 · SILM
• Trustworthy AI Inference Systems: An Industry Research View
  Rosario Cammarota, M. Schunter, Anand Rajan, Fabian Boemer, Ágnes Kiss, ..., Aydin Aysu, Fateme S. Hosseini, Chengmo Yang, Eric Wallace, Pam Norton · 10 Aug 2020

• DeepPeep: Exploiting Design Ramifications to Decipher the Architecture of Compact DNNs
  N. Jha, Sparsh Mittal, Binod Kumar, Govardhan Mattela · 30 Jul 2020 · AAML

• T-BFA: Targeted Bit-Flip Adversarial Weight Attack
  Adnan Siraj Rakin, Zhezhi He, Jingtao Li, Fan Yao, C. Chakrabarti, Deliang Fan · 24 Jul 2020 · AAML

• Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
  Yansong Gao, Bao Gia Doan, Zhi-Li Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim · 21 Jul 2020 · AAML
• Database Reconstruction from Noisy Volumes: A Cache Side-Channel Attack on SQLite
  Aria Shahverdi, M. Shirinov, Dana Dachman-Soled · 26 Jun 2020 · AAML

• Hermes Attack: Steal DNN Models with Lossless Inference Accuracy
  Yuankun Zhu, Yueqiang Cheng, Husheng Zhou, Yantao Lu · 23 Jun 2020 · MIACV, AAML

• De-Anonymizing Text by Fingerprinting Language Generation
  Zhen Sun, R. Schuster, Vitaly Shmatikov · 17 Jun 2020

• BoMaNet: Boolean Masking of an Entire Neural Network
  Anuj Dubey, Rosario Cammarota, Aydin Aysu · 16 Jun 2020 · AAML

• SPEED: Secure, PrivatE, and Efficient Deep learning
  Arnaud Grivet Sébert, Rafael Pinot, Martin Zuber, Cédric Gouy-Pailler, Renaud Sirdey · 16 Jun 2020 · FedML
• Stealing Deep Reinforcement Learning Models for Fun and Profit
  Kangjie Chen, Shangwei Guo, Tianwei Zhang, Xiaofei Xie, Yang Liu · 09 Jun 2020 · MLAU, MIACV, OffRL

• A Protection against the Extraction of Neural Network Models
  H. Chabanne, Vincent Despiegel, Linda Guiga · 26 May 2020 · FedML

• Revisiting Membership Inference Under Realistic Assumptions
  Bargav Jayaraman, Lingxiao Wang, Katherine Knipmeyer, Quanquan Gu, David E. Evans · 21 May 2020

• Privacy in Deep Learning: A Survey
  Fatemehsadat Mirshghallah, Mohammadkazem Taram, Praneeth Vepakomma, Abhishek Singh, Ramesh Raskar, H. Esmaeilzadeh · 25 Apr 2020 · FedML

• MGX: Near-Zero Overhead Memory Protection for Data-Intensive Accelerators
  Weizhe Hua, M. Umar, Zhiru Zhang, G. E. Suh · 20 Apr 2020 · GNN
• DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips
  Fan Yao, Adnan Siraj Rakin, Deliang Fan · 30 Mar 2020 · AAML

• How to 0wn NAS in Your Spare Time
  Sanghyun Hong, Michael Davinroy, Yigitcan Kaya, Dana Dachman-Soled, Tudor Dumitras · 17 Feb 2020

• Ten AI Stepping Stones for Cybersecurity
  Ricardo Morla · 14 Dec 2019

• Survey of Attacks and Defenses on Edge-Deployed Neural Networks
  Mihailo Isakov, V. Gadepally, K. Gettings, Michel A. Kinsy · 27 Nov 2019 · AAML

• SpecuSym: Speculative Symbolic Execution for Cache Timing Leak Detection
  Shengjian Guo, Yueqi Chen, Peng Li, Yueqiang Cheng, Huibo Wang, Meng Wu, Zhiqiang Zuo · 04 Nov 2019
• Quantifying (Hyper) Parameter Leakage in Machine Learning
  Vasisht Duddu, D. V. Rao · 31 Oct 2019 · AAML, MIACV, FedML

• MaskedNet: The First Hardware Inference Engine Aiming Power Side-Channel Protection
  Anuj Dubey, Rosario Cammarota, Aydin Aysu · 29 Oct 2019 · AAML

• IPGuard: Protecting Intellectual Property of Deep Neural Networks via Fingerprinting the Classification Boundary
  Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong · 28 Oct 2019

• A framework for the extraction of Deep Neural Networks by leveraging public data
  Soham Pal, Yash Gupta, Aditya Shukla, Aditya Kanade, S. Shevade, V. Ganapathy · 22 May 2019 · FedML, MLAU, MIACV

• IRONHIDE: A Secure Multicore that Efficiently Mitigates Microarchitecture State Attacks for Interactive Applications
  H. Omar, O. Khan · 29 Apr 2019
• Neural Network Model Extraction Attacks in Edge Devices by Hearing Architectural Hints
  Xing Hu, Ling Liang, Lei Deng, Shuangchen Li, Xinfeng Xie, Yu Ji, Yufei Ding, Chang Liu, T. Sherwood, Yuan Xie · 10 Mar 2019 · AAML, MLAU

• Evaluating Differentially Private Machine Learning in Practice
  Bargav Jayaraman, David E. Evans · 24 Feb 2019

• Stealing Neural Networks via Timing Side Channels
  Vasisht Duddu, D. Samanta, D. V. Rao, V. Balas · 31 Dec 2018 · AAML, MLAU, FedML

• How Secure are Deep Learning Algorithms from Side-Channel based Reverse Engineering?
  Manaar Alam, Debdeep Mukhopadhyay · 13 Nov 2018 · FedML, MIACV

• Security Analysis of Deep Neural Networks Operating in the Presence of Cache Side-Channel Attacks
  Sanghyun Hong, Michael Davinroy, Yigitcan Kaya, S. Locke, Ian Rackow, Kevin Kulda, Dana Dachman-Soled, Tudor Dumitras · 08 Oct 2018 · MIACV
• Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
  Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean · 26 Sep 2016 · AIMat