Backdoor Vulnerabilities in Normally Trained Deep Learning Models

arXiv: 2211.15929 · 29 November 2022

Guanhong Tao, Zhenting Wang, Shuyang Cheng, Shiqing Ma, Shengwei An, Yingqi Liu, Guangyu Shen, Zhuo Zhang, Yunshu Mao, Xiangyu Zhang

Topic: SILM

Papers citing "Backdoor Vulnerabilities in Normally Trained Deep Learning Models"

10 of 10 papers shown

1. Backdoor Vectors: a Task Arithmetic View on Backdoor Attacks and Defenses
   Stanisław Pawlak, Jan Dubiñski, Daniel Marczak, Bartłomiej Twardowski
   Topics: AAML, MoMe · 09 Oct 2025

2. DeBackdoor: A Deductive Framework for Detecting Backdoor Attacks on Deep Models with Limited Data
   Dorde Popovic, Amin Sadeghi, Ting Yu, Sanjay Chawla, Issa M. Khalil
   Topic: AAML · 27 Mar 2025

3. UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening
   Shuyang Cheng, Guangyu Shen, Kaiyuan Zhang, Guanhong Tao, Shengwei An, Hanxi Guo, Shiqing Ma, Xiangyu Zhang
   Topic: AAML · 16 Jul 2024

4. Invisible Backdoor Attack against Self-supervised Learning
   Computer Vision and Pattern Recognition (CVPR), 2024
   Hanrong Zhang, Zhenting Wang, Tingxu Han, Haoyang Ling, Chenlu Zhan, Jundong Li, Hongwei Wang, Shiqing Ma
   Topics: AAML, SSL · 23 May 2024

5. How to Trace Latent Generative Model Generated Images without Artificial Watermark?
   Zhenting Wang, Vikash Sehwag, Chen Chen, Lingjuan Lyu, Dimitris N. Metaxas, Shiqing Ma
   Topic: WIGM · 22 May 2024

6. Alteration-free and Model-agnostic Origin Attribution of Generated Images
   Zhenting Wang, Chen Chen, Yi Zeng, Lingjuan Lyu, Shiqing Ma
   29 May 2023

7. NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models
   Annual Meeting of the Association for Computational Linguistics (ACL), 2023
   Kai Mei, Zheng Li, Zhenting Wang, Yang Zhang, Shiqing Ma
   Topics: AAML, SILM · 28 May 2023

8. UNICORN: A Unified Backdoor Trigger Inversion Framework
   International Conference on Learning Representations (ICLR), 2023
   Zhenting Wang, Kai Mei, Juan Zhai, Shiqing Ma
   Topic: LLMSV · 05 Apr 2023

9. Mithridates: Auditing and Boosting Backdoor Resistance of Machine Learning Pipelines
   Conference on Computer and Communications Security (CCS), 2023
   Eugene Bagdasaryan, Vitaly Shmatikov
   Topic: AAML · 09 Feb 2023

10. BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense
    Network and Distributed System Security Symposium (NDSS), 2023
    Shuyang Cheng, Guanhong Tao, Yingqi Liu, Shengwei An, Xiangzhe Xu, ..., Guangyu Shen, Kaiyuan Zhang, Qiuling Xu, Shiqing Ma, Xiangyu Zhang
    Topic: AAML · 16 Jan 2023