Label Poisoning is All You Need
Rishi Jha, J. Hayase, Sewoong Oh
arXiv:2310.18933 · 29 October 2023 · AAML
Papers citing "Label Poisoning is All You Need" (10 of 10 shown):
- Robustness of Selected Learning Models under Label-Flipping Attack. Sarvagya Bhargava, Mark Stamp. AAML. 21 Jan 2025.
- CAT: Concept-level backdoor ATtacks for Concept Bottleneck Models. Songning Lai, Jiayu Yang, Yu Huang, Lijie Hu, Tianlang Xue, Zhangyi Hu, Jiaxu Li, Haicheng Liao, Yutao Yue. 07 Oct 2024.
- Domain Watermark: Effective and Harmless Dataset Copyright Protection is Closed at Hand. Junfeng Guo, Yiming Li, Lixu Wang, Shu-Tao Xia, Heng-Chiao Huang, Cong Liu, Boheng Li. 09 Oct 2023.
- Adversarial Illusions in Multi-Modal Embeddings. Tingwei Zhang, Rishi Jha, Eugene Bagdasaryan, Vitaly Shmatikov. AAML. 22 Aug 2023.
- Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective. Baoyuan Wu, Zihao Zhu, Li Liu, Qingshan Liu, Zhaofeng He, Siwei Lyu. AAML. 19 Feb 2023.
- Enhancing Backdoor Attacks with Multi-Level MMD Regularization. Pengfei Xia, Hongjing Niu, Ziqiang Li, Bin Li. AAML. 09 Nov 2021.
- Manipulating SGD with Data Ordering Attacks. Ilia Shumailov, Zakhar Shumaylov, Dmitry Kazhdan, Yiren Zhao, Nicolas Papernot, Murat A. Erdogdu, Ross J. Anderson. AAML. 19 Apr 2021.
- DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection. Yuanchun Li, Jiayi Hua, Haoyu Wang, Chunyang Chen, Yunxin Liu. FedML, SILM. 18 Jan 2021.
- Clean-Label Backdoor Attacks on Video Recognition Models. Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen, Yu-Gang Jiang. AAML. 06 Mar 2020.
- SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems. Edward Chou, Florian Tramèr, Giancarlo Pellegrino. AAML. 02 Dec 2018.