Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush Deep Neural Network in Multi-Tenant FPGA
arXiv:2011.03006 · 5 November 2020
Adnan Siraj Rakin, Yukui Luo, Xiaolin Xu, Deliang Fan
AAML

Papers citing "Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush Deep Neural Network in Multi-Tenant FPGA"

12 citing papers shown
Verification of Bit-Flip Attacks against Quantized Neural Networks
Yedi Zhang, Lei Huang, Pengfei Gao, Fu Song, Jun Sun, Jin Song Dong
AAML · 22 Feb 2025
Data Duplication: A Novel Multi-Purpose Attack Paradigm in Machine Unlearning
Dayong Ye, Tainqing Zhu, Junlong Li, Kun Gao, B. Liu, Guang Dai, Wanlei Zhou, Yanmei Zhang
AAML, MU · 28 Jan 2025
Threshold Breaker: Can Counter-Based RowHammer Prevention Mechanisms Truly Safeguard DRAM?
Ranyang Zhou, Jacqueline T. Liu, Sabbir Ahmed, Nakul Kochar, Adnan Siraj Rakin, Shaahin Angizi
28 Nov 2023
One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training
IEEE International Conference on Computer Vision (ICCV), 2023
Jianshuo Dong, Han Qiu, Yiming Li, Tianwei Zhang, Yuan-Fang Li, Zeqi Lai, Chao Zhang, Shutao Xia
AAML · 12 Aug 2023
NNSplitter: An Active Defense Solution for DNN Model via Automated Weight Obfuscation
International Conference on Machine Learning (ICML), 2023
Tong Zhou, Yukui Luo, Shaolei Ren, Xiaolin Xu
AAML · 28 Apr 2023
Pentimento: Data Remanence in Cloud FPGAs
International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2023
Colin Drewes, Olivia Weng, Andres Meza, Alric Althoff, David Kohlbrenner, Ryan Kastner, D. Richmond
31 Mar 2023
Aegis: Mitigating Targeted Bit-flip Attacks against Deep Neural Networks
USENIX Security Symposium (USENIX Security), 2023
Jialai Wang, Ziyuan Zhang, Meiqi Wang, Han Qiu, Tianwei Zhang, Qi Li, Zongpeng Li, Tao Wei, Chao Zhang
AAML · 27 Feb 2023
"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice
Giovanni Apruzzese, Hyrum S. Anderson, Savino Dambra, D. Freeman, Fabio Pierazzi, Kevin A. Roundy
AAML · 29 Dec 2022
Logic and Reduction Operation based Hardware Trojans in Digital Design
International SoC Design Conference (ISOCC), 2022
Mayukhmali Das, Sounak Dutta, S. Chatterjee
09 Sep 2022
NNReArch: A Tensor Program Scheduling Framework Against Neural Network Architecture Reverse Engineering
IEEE Symposium on Field-Programmable Custom Computing Machines (FCCM), 2022
Yukui Luo, Shijin Duan, Gongye Cheng, Yunsi Fei, Xiaolin Xu
22 Mar 2022
DeepStrike: Remotely-Guided Fault Injection Attacks on DNN Accelerator in Cloud-FPGA
Design Automation Conference (DAC), 2021
Yukui Luo, Cheng Gongye, Yunsi Fei, Xiaolin Xu
20 May 2021
Neighbors From Hell: Voltage Attacks Against Deep Learning Accelerators on Multi-Tenant FPGAs
International Conference on Field-Programmable Technology (ICFPT), 2020
Andrew Boutros, Mathew Hall, Nicolas Papernot, Vaughn Betz
14 Dec 2020