
TBT: Targeted Neural Network Attack with Bit Trojan
Computer Vision and Pattern Recognition (CVPR), 2019 · arXiv:1909.05193 · 10 September 2019
Adnan Siraj Rakin, Zhezhi He, Deliang Fan
AAML

Papers citing "TBT: Targeted Neural Network Attack with Bit Trojan"

50 of 113 citing papers shown
Rounding-Guided Backdoor Injection in Deep Learning Model Quantization
Xiangxiang Chen, Peixin Zhang, Jun Sun, Wenhai Wang, Jingyi Wang
AAML · 05 Oct 2025

SBFA: Single Sneaky Bit Flip Attack to Break Large Language Models
Jingkai Guo, C. Chakrabarti, Deliang Fan
AAML · 26 Sep 2025

ObfusBFA: A Holistic Approach to Safeguarding DNNs from Different Types of Bit-Flip Attacks
Xiaobei Yan, Han Qiu, Minlie Huang
AAML · 12 Jun 2025

GaussTrap: Stealthy Poisoning Attacks on 3D Gaussian Splatting for Targeted Scene Confusion
Jiaxin Hong, Sixu Chen, Shuoyang Sun, Hongyao Yu, Hao Fang, Yuqi Tan, Bin Chen, Shuhan Qi, Jiawei Li
3DGS, AAML · 29 Apr 2025

Robo-Troj: Attacking LLM-based Task Planners
Mohaiminul Al Nahian, Zainab Altaweel, David Reitano, Sabbir Ahmed, Saumitra Lohokare, Shiqi Zhang
AAML · 23 Apr 2025

Hessian-aware Training for Enhancing DNNs Resilience to Parameter Corruptions
Tahmid Hasan Prato, Seijoon Kim, Lizhong Chen, Sanghyun Hong
AAML · 02 Apr 2025

Seal Your Backdoor with Variational Defense
Ivan Sabolić, Matej Grcić, Sinisa Segvic
AAML · 11 Mar 2025

Stealthy Backdoor Attack to Real-world Models in Android Apps
Jiali Wei, Ming Fan, Xicheng Zhang, Wenjing Jiao, Jian Shu, Ting Liu
AAML · 03 Jan 2025

PrisonBreak: Jailbreaking Large Language Models with at Most Twenty-Five Targeted Bit-flips
Zachary Coalson, Jeonghyun Woo, Shiyang Chen, Yu Sun, ..., Lishan Yang, Gururaj Saileshwar, Prashant J. Nair, Bo Fang, Sanghyun Hong
AAML · 10 Dec 2024

Data Free Backdoor Attacks
Neural Information Processing Systems (NeurIPS), 2024
Bochuan Cao, Jinyuan Jia, Chuxuan Hu, Wenbo Guo, Zhen Xiang, Jinghui Chen, Yue Liu, Dawn Song
AAML · 09 Dec 2024

Robust and Transferable Backdoor Attacks Against Deep Image Compression With Selective Frequency Prior
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024
Yi Yu, Yufei Wang, Wenhan Yang, Lanqing Guo, Shijian Lu, Ling-yu Duan, Yap-Peng Tan, Alex C. Kot
AAML · 02 Dec 2024

BadScan: An Architectural Backdoor Attack on Visual State Space Models
Om Suhas Deshmukh, Sankalp Nagaonkar, A. Tripathi, Ashish Mishra
Mamba · 26 Nov 2024

SoK: A Systems Perspective on Compound AI Threats and Countermeasures
Sarbartha Banerjee, Prateek Sahu, Mulong Luo, Anjo Vahldiek-Oberwagner, N. Yadwadkar, Mohit Tiwari
AAML · 20 Nov 2024

Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks
Lingxin Jin, Meiyu Lin, Wei Jiang, Jinyu Zhan
AAML, SILM · 24 Sep 2024

CAMH: Advancing Model Hijacking Attack in Machine Learning
AAAI Conference on Artificial Intelligence (AAAI), 2024
Xing He, Jiahao Chen, Yuwen Pu, Qingming Li, Chunyi Zhou, Yingcai Wu, Jinbao Li, Shouling Ji
25 Aug 2024

A Practical Trigger-Free Backdoor Attack on Neural Networks
Jiahao Wang, Xianglong Zhang, Xiuzhen Cheng, Pengfei Hu, Guoming Zhang
AAML · 21 Aug 2024

Towards Physical World Backdoor Attacks against Skeleton Action Recognition
European Conference on Computer Vision (ECCV), 2024
Qichen Zheng, Yi Yu, Siyuan Yang, Jun Liu, Kwok-Yan Lam, Alex C. Kot
AAML · 16 Aug 2024

A Survey of Trojan Attacks and Defenses to Deep Neural Networks
Lingxin Jin, Xianyu Wen, Wei Jiang, Jinyu Zhan
AAML · 15 Aug 2024

DeepBaR: Fault Backdoor Attack on Deep Neural Network Layers
Camilo A. Martínez-Mejía, Jesus Solano, J. Breier, Dominik Bucko, Xiaolu Hou
AAML · 30 Jul 2024

IPA-NeRF: Illusory Poisoning Attack Against Neural Radiance Fields
Wenxiang Jiang, Hanwei Zhang, Shuo Zhao, Zhongwen Guo
AAML · 16 Jul 2024

RMF: A Risk Measurement Framework for Machine Learning Models
ARES, 2024
Jan Schröder, Jakub Breier
15 Jun 2024

AI Risk Management Should Incorporate Both Safety and Security
Xiangyu Qi, Yangsibo Huang, Yi Zeng, Edoardo Debenedetti, Jonas Geiping, ..., Chaowei Xiao, Yue Liu, Dawn Song, Peter Henderson, Prateek Mittal
AAML · 29 May 2024

DeepNcode: Encoding-Based Protection against Bit-Flip Attacks on Neural Networks
Patrik Velcický, J. Breier, Mladen Kovacevic, Xiaolu Hou
AAML · 22 May 2024

Adversarial Robustness for Visual Grounding of Multimodal Large Language Models
Kuofeng Gao, Yang Bai, Jiawang Bai, Yong Yang, Shu-Tao Xia
AAML · 16 May 2024

Test-Time Backdoor Attacks on Multimodal Large Language Models
Dong Lu, Tianyu Pang, Chao Du, Qian Liu, Xianjun Yang, Min Lin
AAML · 13 Feb 2024

Game of Trojans: Adaptive Adversaries Against Output-based Trojaned-Model Detectors
D. Sahabandu, Xiaojun Xu, Arezoo Rajabi, Luyao Niu, Bhaskar Ramasubramanian, Bo Li, Radha Poovendran
AAML · 12 Feb 2024

DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models
Jiachen Zhou, Peizhuo Lv, Yibing Lan, Guozhu Meng, Kai Chen, Hualong Ma
AAML · 18 Dec 2023

Generating Visually Realistic Adversarial Patch
Xiaosen Wang, Kunyu Wang
AAML · 05 Dec 2023

Attacking Graph Neural Networks with Bit Flips: Weisfeiler and Lehman Go Indifferent
Lorenz Kummer, Samir Moustafa, Nils N. Kriege, Wilfried N. Gansterer
GNN, AAML · 02 Nov 2023

Label Poisoning is All You Need
Neural Information Processing Systems (NeurIPS), 2023
Rishi Jha, J. Hayase, Sewoong Oh
AAML · 29 Oct 2023

Safe and Robust Watermark Injection with a Single OoD Image
International Conference on Learning Representations (ICLR), 2023
Shuyang Yu, Junyuan Hong, Haobo Zhang, Haotao Wang, Zinan Lin, Jiayu Zhou
WIGM · 04 Sep 2023

Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor Attack
Sze Jue Yang, Q. Nguyen, Chee Seng Chan, Khoa D. Doan
AAML, DiffM · 31 Aug 2023

One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training
IEEE International Conference on Computer Vision (ICCV), 2023
Jianshuo Dong, Han Qiu, Yiming Li, Tianwei Zhang, Yuan-Fang Li, Zeqi Lai, Chao Zhang, Shutao Xia
AAML · 12 Aug 2023

Backdoor Federated Learning by Poisoning Backdoor-Critical Layers
International Conference on Learning Representations (ICLR), 2023
Haomin Zhuang, Mingxian Yu, Hao Wang, Yang Hua, Jian Li, Xu Yuan
FedML · 08 Aug 2023

TIJO: Trigger Inversion with Joint Optimization for Defending Multimodal Backdoored Models
IEEE International Conference on Computer Vision (ICCV), 2023
Indranil Sur, Karan Sikka, Matthew Walmer, K. Koneripalli, Anirban Roy, Xiaoyu Lin, Ajay Divakaran, Susmit Jha
07 Aug 2023

Beating Backdoor Attack at Its Own Game
IEEE International Conference on Computer Vision (ICCV), 2023
Min Liu, Alberto L. Sangiovanni-Vincentelli, Xiangyu Yue
AAML · 28 Jul 2023

OVLA: Neural Network Ownership Verification using Latent Watermarks
Feisi Fu, Wenchao Li
AAML · 15 Jun 2023

Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios
International Conference on Learning Representations (ICLR), 2023
Wandi Qiao, Hong Sun, Pengfei Xia, Heng Li, Beihao Xia, Yi Wu, Bin Li
AAML · 14 Jun 2023

A Proxy Attack-Free Strategy for Practically Improving the Poisoning Efficiency in Backdoor Attacks
IEEE Transactions on Information Forensics and Security (IEEE TIFS), 2023
Wandi Qiao, Hong Sun, Pengfei Xia, Beihao Xia, Xue Rui, Wei Zhang, Qinglang Guo, Bin Li
AAML · 14 Jun 2023

Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study
Yiqi Zhong, Xianming Liu, Deming Zhai, Junjun Jiang, Xiang Ji
AAML · 28 May 2023

Decision-based iterative fragile watermarking for model integrity verification
Z. Yin, Heng Yin, Hang Su, Xinpeng Zhang, Zhenzhe Gao
AAML · 13 May 2023

Exploring the Landscape of Machine Unlearning: A Comprehensive Survey and Taxonomy
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2023
T. Shaik, Xiaohui Tao, Haoran Xie, Lin Li, Xiaofeng Zhu, Qingyuan Li
MU · 10 May 2023

Influencer Backdoor Attack on Semantic Segmentation
International Conference on Learning Representations (ICLR), 2023
Haoheng Lan, Jindong Gu, Juil Sock, Hengshuang Zhao
AAML · 21 Mar 2023

TrojText: Test-time Invisible Textual Trojan Insertion
International Conference on Learning Representations (ICLR), 2023
Qiang Lou, Ye Liu, Bo Feng
03 Mar 2023

Backdoor Attacks Against Deep Image Compression via Adaptive Frequency Trigger
Computer Vision and Pattern Recognition (CVPR), 2023
Yi Yu, Yufei Wang, Wenhan Yang, Shijian Lu, Yap-Peng Tan, Alex C. Kot
28 Feb 2023

Aegis: Mitigating Targeted Bit-flip Attacks against Deep Neural Networks
USENIX Security Symposium (USENIX Security), 2023
Jialai Wang, Ziyuan Zhang, Meiqi Wang, Han Qiu, Tianwei Zhang, Qi Li, Zongpeng Li, Tao Wei, Chao Zhang
AAML · 27 Feb 2023

Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective
Baoyuan Wu, Zihao Zhu, Li Liu, Qingshan Liu, Zhaofeng He, Siwei Lyu
AAML · 19 Feb 2023

Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions
Marwan Omar
SILM, AAML · 14 Feb 2023

BDMMT: Backdoor Sample Detection for Language Models through Model Mutation Testing
IEEE Transactions on Information Forensics and Security (IEEE TIFS), 2023
Jiali Wei, Ming Fan, Wenjing Jiao, Wuxia Jin, Ting Liu
AAML · 25 Jan 2023

Federated Learning for Energy Constrained IoT devices: A systematic mapping study
Cluster Computing (CC), 2022
Rachid El Mokadem, Yann Ben Maissa, Zineb El Akkaoui
09 Jan 2023