ResearchTrend.AI

Bullseye Polytope: A Scalable Clean-Label Poisoning Attack with Improved Transferability
arXiv:2005.00191

1 May 2020
H. Aghakhani, Dongyu Meng, Yu-Xiang Wang, Christopher Kruegel, Giovanni Vigna
AAML

Papers citing "Bullseye Polytope: A Scalable Clean-Label Poisoning Attack with Improved Transferability" (15 papers shown)
Machine Unlearning Fails to Remove Data Poisoning Attacks
Martin Pawelczyk, Jimmy Z. Di, Yiwei Lu, Gautam Kamath, Ayush Sekhari, Seth Neel
AAML, MU · 25 Jun 2024
PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models
Omead Brandon Pooladzandi, Jeffrey Q. Jiang, Sunay Bhat, Gregory Pottie
AAML · 28 May 2024
Transferable Availability Poisoning Attacks
Yiyong Liu, Michael Backes, Xiao Zhang
AAML · 08 Oct 2023
Edge Learning for 6G-enabled Internet of Things: A Comprehensive Survey of Vulnerabilities, Datasets, and Defenses
M. Ferrag, Othmane Friha, B. Kantarci, Norbert Tihanyi, Lucas C. Cordeiro, Merouane Debbah, Djallel Hamouda, Muna Al-Hawawreh, K. Choo
17 Jun 2023
TrojanPuzzle: Covertly Poisoning Code-Suggestion Models
H. Aghakhani, Wei Dai, Andre Manoel, Xavier Fernandes, Anant Kharkar, Christopher Kruegel, Giovanni Vigna, David E. Evans, B. Zorn, Robert Sim
SILM · 06 Jan 2023
Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks
Tianwei Liu, Yu Yang, Baharan Mirzasoleiman
AAML · 14 Aug 2022
Adversarial attacks and defenses in Speaker Recognition Systems: A survey
Jiahe Lan, Rui Zhang, Zheng Yan, Jie Wang, Yu Chen, Ronghui Hou
AAML · 27 May 2022
The MeVer DeepFake Detection Service: Lessons Learnt from Developing and Deploying in the Wild
Spyridon Baxevanakis, Giorgos Kordopatis-Zilos, Panagiotis Galopoulos, Lazaros Apostolidis, Killian Levacher, Ipek B. Schlicht, Denis Teyssou, I. Kompatsiaris, Symeon Papadopoulos
27 Apr 2022
Indiscriminate Data Poisoning Attacks on Neural Networks
Yiwei Lu, Gautam Kamath, Yaoliang Yu
AAML · 19 Apr 2022
WaveFuzz: A Clean-Label Poisoning Attack to Protect Your Voice
Yunjie Ge, Qianqian Wang, Jingfeng Zhang, Juntao Zhou, Yunzhu Zhang, Chao Shen
AAML · 25 Mar 2022
Low-Rank Softmax Can Have Unargmaxable Classes in Theory but Rarely in Practice
Andreas Grivas, Nikolay Bogoychev, Adam Lopez
12 Mar 2022
Disrupting Model Training with Adversarial Shortcuts
Ivan Evtimov, Ian Covert, Aditya Kusupati, Tadayoshi Kohno
AAML · 12 Jun 2021
Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
Jonas Geiping, Liam H. Fowl, W. R. Huang, W. Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein
AAML · 04 Sep 2020
Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P. Dickerson, Tom Goldstein
AAML, TDI · 22 Jun 2020
Aggregated Residual Transformations for Deep Neural Networks
Saining Xie, Ross B. Girshick, Piotr Dollár, Z. Tu, Kaiming He
16 Nov 2016