arXiv:2006.14768
Deep Partition Aggregation: Provable Defense against General Poisoning Attacks
Alexander Levine, Soheil Feizi
26 June 2020

Papers citing "Deep Partition Aggregation: Provable Defense against General Poisoning Attacks" (50 of 106 papers shown)

On Robustness of Linear Classifiers to Targeted Data Poisoning
Nakshatra Gupta, Sumanth Prabhu, Supratik Chakraborty, R Venkatesh
16 Nov 2025

Provably Robust Adaptation for Language-Empowered Foundation Models
Y. Lai, Xiaoyu Xue, Linghui Shen, Yulun Wu, Gaolei Li, Song Guo, Kai Zhou, Bin Xiao
09 Oct 2025

Reconcile Certified Robustness and Accuracy for DNN-based Smoothed Majority Vote Classifier
Gaojie Jin, Xinping Yi, Xiaowei Huang
30 Sep 2025

Multi-level Certified Defense Against Poisoning Attacks in Offline Reinforcement Learning
International Conference on Learning Representations (ICLR), 2025
Shijie Liu, Andrew C. Cullen, Paul Montague, S. Erfani, Benjamin I. P. Rubinstein
27 May 2025

Cert-SSBD: Certified Backdoor Defense with Sample-Specific Smoothing Noises
Ting Qiao, Longji Xu, Xing Liu, Sixing Wu, Jianbing Li, Yiming Li
30 Apr 2025

Frontier AI's Impact on the Cybersecurity Landscape
Wenbo Guo, Tianneng Shi, Yu Yang, Andy Zhang, Patrick Gage Kelley, Kurt Thomas, Dawn Song
07 Apr 2025

Practical Poisoning Attacks against Retrieval-Augmented Generation
Baolei Zhang, Yuxiao Chen, Minghong Fang, Zhuqing Liu, Lihai Nie, Tong Li, Zheli Liu
04 Apr 2025

Deterministic Certification of Graph Neural Networks against Graph Poisoning Attacks with Arbitrary Perturbations
Computer Vision and Pattern Recognition (CVPR), 2025
Jiate Li, Meng Pang, Yun Dong, Binghui Wang
24 Mar 2025

Principal Eigenvalue Regularization for Improved Worst-Class Certified Robustness of Smoothed Classifiers
Gaojie Jin, Tianjin Huang, Ronghui Mu, Xiaowei Huang
21 Mar 2025

Trust Under Siege: Label Spoofing Attacks against Machine Learning for Android Malware Detection
Tianwei Lan, Luca Demetrio, Farid Nait-Abdesselam, Yufei Han, Simone Aonzo
14 Mar 2025

AGNNCert: Defending Graph Neural Networks against Arbitrary Perturbations with Deterministic Certification
Jiate Li, Binghui Wang
02 Feb 2025

Game-Theoretic Defenses for Robust Conformal Prediction Against Adversarial Attacks in Medical Imaging
Rui Luo, Jie Bao, Zhixin Zhou, Chuangyin Dang
07 Nov 2024

Timber! Poisoning Decision Trees
Stefano Calzavara, Lorenzo Cazzaro, Massimo Vettori
01 Oct 2024

Psychometrics for Hypnopaedia-Aware Machinery via Chaotic Projection of Artificial Mental Imagery
Ching-Chun Chang, Kai Gao, Shuying Xu, Anastasia Kordoni, Christopher Leckie, Isao Echizen
29 Sep 2024

The poison of dimensionality
Lê-Nguyên Hoang
25 Sep 2024

UTrace: Poisoning Forensics for Private Collaborative Learning
Evan Rose, Hidde Lycklama, Harsh Chaudhari, Niklas Britz, Anwar Hithnawi, Alina Oprea
23 Sep 2024

Backdoor defense, learnability and obfuscation
Information Technology Convergence and Services (ITCS), 2024
Paul Christiano, Jacob Hilton, Victor Lecomte, Mark Xu
04 Sep 2024

Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks
Lukas Gosch, Mahalakshmi Sabanayagam, Debarghya Ghoshdastidar, Stephan Günnemann
15 Jul 2024

Augmented Neural Fine-Tuning for Efficient Backdoor Purification
Nazmul Karim, Abdullah Al Arafat, Umar Khalid, Zhishan Guo, Nazanin Rahnavard
14 Jul 2024

Distributed Backdoor Attacks on Federated Graph Learning and Certified Defenses
Yuxin Yang, Qiang Li, Jinyuan Jia, Yuan Hong, Binghui Wang
12 Jul 2024

Model-agnostic clean-label backdoor mitigation in cybersecurity environments
Giorgio Severi, Simona Boboila, J. Holodnak, K. Kratkiewicz, Rauf Izmailov, Alina Oprea
11 Jul 2024

FullCert: Deterministic End-to-End Certification for Training and Inference of Neural Networks
Tobias Lorenz, Marta Kwiatkowska, Mario Fritz
17 Jun 2024

Certified Robustness to Data Poisoning in Gradient-Based Training
Philip Sosnin, Mark N. Müller, Maximilian Baader, Calvin Tsay, Matthew Wicker
09 Jun 2024

PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models
Omead Brandon Pooladzandi, Jeffrey Q. Jiang, Sunay Bhat, Gregory Pottie
28 May 2024

PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics
Sunay Bhat, Jeffrey Q. Jiang, Omead Brandon Pooladzandi, Alexander Branch, Gregory Pottie
28 May 2024

Certifiably Robust RAG against Retrieval Corruption
Chong Xiang, Tong Wu, Zexuan Zhong, David Wagner, Danqi Chen, Prateek Mittal
24 May 2024

Leakage-Resilient and Carbon-Neutral Aggregation Featuring the Federated AI-enabled Critical Infrastructure
Zehang Deng, Ruoxi Sun, Minhui Xue, Sheng Wen, S. Çamtepe, Surya Nepal, Yang Xiang
24 May 2024

Hard Work Does Not Always Pay Off: Poisoning Attacks on Neural Architecture Search
Zachary Coalson, Huazheng Wang, Qingyun Wu, Sanghyun Hong
09 May 2024

FCert: Certifiably Robust Few-Shot Classification in the Era of Foundation Models
Yanting Wang, Wei Zou, Jinyuan Jia
12 Apr 2024

Have You Poisoned My Data? Defending Neural Networks against Data Poisoning
Fabio De Gaspari, Dorjan Hitaj, Luigi V. Mancini
20 Mar 2024

Certified Robustness to Clean-Label Poisoning Using Diffusion Denoising
Sanghyun Hong, Nicholas Carlini, Alexey Kurakin
18 Mar 2024

A general approach to enhance the survivability of backdoor attacks by decision path coupling
Yufei Zhao, Dingji Wang, Bihuan Chen, Ziqian Chen, Xin Peng
05 Mar 2024

PoisonedRAG: Knowledge Poisoning Attacks to Retrieval-Augmented Generation of Large Language Models
Wei Zou, Runpeng Geng, Binghui Wang, Jinyuan Jia
12 Feb 2024

Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance
Wenqi Wei, Ling Liu
02 Feb 2024

Attacking Byzantine Robust Aggregation in High Dimensions
Sarthak Choudhary, Aashish Kolluri, Prateek Saxena
22 Dec 2023

Detection and Defense of Unlearnable Examples
AAAI Conference on Artificial Intelligence (AAAI), 2023
Yifan Zhu, Lijia Yu, Xiao-Shan Gao
14 Dec 2023

Node-aware Bi-smoothing: Certified Robustness against Graph Injection Attacks
Y. Lai, Yulin Zhu, Bailin Pan, Wei Song
07 Dec 2023

Mendata: A Framework to Purify Manipulated Training Data
Zonghao Huang, Neil Zhenqiang Gong, Michael K. Reiter
03 Dec 2023

Continuous Management of Machine Learning-Based Application Behavior
IEEE Transactions on Services Computing (TSC), 2023
M. Anisetti, C. Ardagna, Nicola Bena, Ernesto Damiani, Paolo G. Panero
21 Nov 2023

TextGuard: Provable Defense against Backdoor Attacks on Text Classification
Hengzhi Pei, Jinyuan Jia, Wenbo Guo, Yue Liu, Dawn Song
19 Nov 2023

ERASER: Machine Unlearning in MLaaS via an Inference Serving-Aware Approach
Conference on Computer and Communications Security (CCS), 2023
Yuke Hu, Jian Lou, Jiaqi Liu, Wangze Ni, Feng Lin, Zhan Qin, Kui Ren
03 Nov 2023

Poison is Not Traceless: Fully-Agnostic Detection of Poisoning Attacks
Xinglong Chang, Katharina Dost, Gill Dobbie, Jörg Simon Wicker
24 Oct 2023

PETA: Parameter-Efficient Trojan Attacks
Lauren Hong, Ting Wang
01 Oct 2023

Enhancing the Antidote: Improved Pointwise Certifications against Poisoning Attacks
AAAI Conference on Artificial Intelligence (AAAI), 2023
Shijie Liu, Andrew C. Cullen, Paul Montague, S. Erfani, Benjamin I. P. Rubinstein
15 Aug 2023

Rethinking Backdoor Attacks
International Conference on Machine Learning (ICML), 2023
Alaa Khaddaj, Guillaume Leclerc, Aleksandar Makelov, Kristian Georgiev, Hadi Salman, Andrew Ilyas, Aleksander Madry
19 Jul 2023

Systematic Testing of the Data-Poisoning Robustness of KNN
International Symposium on Software Testing and Analysis (ISSTA), 2023
Yannan Li, Jingbo Wang, Chao Wang
17 Jul 2023

What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners?
Neural Information Processing Systems (NeurIPS), 2023
Fnu Suya, X. Zhang, Yuan Tian, David Evans
03 Jul 2023

On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks
Wenxiao Wang, Soheil Feizi
28 Jun 2023

Adversarial Resilience in Sequential Prediction via Abstention
Neural Information Processing Systems (NeurIPS), 2023
Surbhi Goel, Steve Hanneke, Shay Moran, Abhishek Shetty
22 Jun 2023

Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems
Workshop on Representation Learning for NLP (RepL4NLP), 2023
Ashim Gupta, Amrith Krishna
31 May 2023

Page 1 of 3