Hardware Trojan Attacks on Neural Networks

14 June 2018
Joseph Clements, Yingjie Lao
AAML

Papers citing "Hardware Trojan Attacks on Neural Networks"

13 papers
Evil from Within: Machine Learning Backdoors through Hardware Trojans
Alexander Warnecke, Julian Speith, Janka Möller, Konrad Rieck, C. Paar
17 Apr 2023 · AAML

Stealthy Attack on Algorithmic-Protected DNNs via Smart Bit Flipping
B. Ghavami, Seyd Movi, Zhenman Fang, Lesley Shannon
25 Dec 2021 · AAML

Attacking Deep Learning AI Hardware with Universal Adversarial Perturbation
Mehdi Sadi, B. M. S. Bahar Talukder, Kaniz Mishty, Md. Tauhidur Rahman
18 Nov 2021 · AAML

Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead
Mohamed Bennai, Mahum Naseer, T. Theocharides, C. Kyrkou, O. Mutlu, Lois Orosa, Jungwook Choi
04 Jan 2021 · OOD

Exposing the Robustness and Vulnerability of Hybrid 8T-6T SRAM Memory Architectures to Adversarial Attacks in Deep Neural Networks
Abhishek Moitra, Priyadarshini Panda
26 Nov 2020 · AAML

Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition Systems
Haoliang Li, Yufei Wang, Xiaofei Xie, Yang Liu, Shiqi Wang, Renjie Wan, Lap-Pui Chau (City University of Hong Kong)
15 Sep 2020 · AAML

Blackbox Trojanising of Deep Learning Models: Using non-intrusive network structure and binary alterations
Jonathan Pan
02 Aug 2020 · AAML

Backdoor Attacks to Graph Neural Networks
Zaixi Zhang, Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong
19 Jun 2020 · GNN

PoisHygiene: Detecting and Mitigating Poisoning Attacks in Neural Networks
Junfeng Guo, Zelun Kong, Cong Liu
24 Mar 2020 · AAML

Detecting AI Trojans Using Meta Neural Analysis
Xiaojun Xu, Qi Wang, Huichen Li, Nikita Borisov, Carl A. Gunter, Bo-wen Li
08 Oct 2019

Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks
Sanghyun Hong, Pietro Frigo, Yigitcan Kaya, Cristiano Giuffrida, Tudor Dumitras
03 Jun 2019 · AAML

Bit-Flip Attack: Crushing Neural Network with Progressive Bit Search
Adnan Siraj Rakin, Zhezhi He, Deliang Fan
28 Mar 2019 · AAML

SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
Edward Chou, Florian Tramèr, Giancarlo Pellegrino
02 Dec 2018 · AAML