ResearchTrend.AI

Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks (arXiv:1803.06975)

19 March 2018
Octavian Suciu, R. Marginean, Yigitcan Kaya, Hal Daumé, Tudor Dumitras
    AAML

Papers citing "Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks"

50 / 132 papers shown
A Client-level Assessment of Collaborative Backdoor Poisoning in Non-IID Federated Learning
Phung Lai, Guanxiong Liu, Hai Phan, Issa M. Khalil, Abdallah Khreishah, Xintao Wu
FedML
17 Apr 2025
Like Oil and Water: Group Robustness Methods and Poisoning Defenses May Be at Odds
Michael-Andrei Panaitescu-Liess, Yigitcan Kaya, Sicheng Zhu, Furong Huang, Tudor Dumitras
AAML
02 Apr 2025
Data Poisoning in Deep Learning: A Survey
Pinlong Zhao, Weiyao Zhu, Pengfei Jiao, Di Gao, Ou Wu
AAML
27 Mar 2025
PoisonedParrot: Subtle Data Poisoning Attacks to Elicit Copyright-Infringing Content from Large Language Models
Michael-Andrei Panaitescu-Liess, Pankayaraj Pathmanathan, Yigitcan Kaya, Zora Che, Bang An, Sicheng Zhu, Aakriti Agrawal, Furong Huang
AAML
10 Mar 2025
Iterative Window Mean Filter: Thwarting Diffusion-based Adversarial Purification
Hanrui Wang, Ruoxi Sun, Cunjian Chen, Minhui Xue, Lay-Ki Soon, Shuo Wang, Zhe Jin
DiffM, AAML
20 Aug 2024
Breaking Agents: Compromising Autonomous LLM Agents Through Malfunction Amplification
Boyang Zhang, Yicong Tan, Yun Shen, Ahmed Salem, Michael Backes, Savvas Zannettou, Yang Zhang
LLMAG, AAML
30 Jul 2024
Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks
Lukas Gosch, Mahalakshmi Sabanayagam, D. Ghoshdastidar, Stephan Günnemann
AAML
15 Jul 2024
Dye4AI: Assuring Data Boundary on Generative AI Services
Shu Wang, Kun Sun, Yan Zhai
20 Jun 2024
Phantom: General Trigger Attacks on Retrieval Augmented Language Generation
Harsh Chaudhari, Giorgio Severi, John Abascal, Matthew Jagielski, Christopher A. Choquette-Choo, Milad Nasr, Cristina Nita-Rotaru, Alina Oprea
SILM, AAML
30 May 2024
Defending against Data Poisoning Attacks in Federated Learning via User Elimination
Nick Galanis
AAML
19 Apr 2024
Algorithmic Complexity Attacks on Dynamic Learned Indexes
Rui Yang, Evgenios M. Kornaropoulos, Yue Cheng
AAML
19 Mar 2024
Diffusion Denoising as a Certified Defense against Clean-label Poisoning
Sanghyun Hong, Nicholas Carlini, Alexey Kurakin
DiffM
18 Mar 2024
On the Conflict of Robustness and Learning in Collaborative Machine Learning
Mathilde Raynal, Carmela Troncoso
21 Feb 2024
Preference Poisoning Attacks on Reward Model Learning
Junlin Wu, Jiong Wang, Chaowei Xiao, Chenguang Wang, Ning Zhang, Yevgeniy Vorobeychik
AAML
02 Feb 2024
Logit Poisoning Attack in Distillation-based Federated Learning and its Countermeasures
Yonghao Yu, Shunan Zhu, Jinglu Hu
AAML, FedML
31 Jan 2024
MalPurifier: Enhancing Android Malware Detection with Adversarial Purification against Evasion Attacks
Yuyang Zhou, Guang Cheng, Zongyao Chen, Shui Yu
AAML
11 Dec 2023
Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems
Guangjing Wang, Ce Zhou, Yuanda Wang, Bocheng Chen, Hanqing Guo, Qiben Yan
AAML, SILM
20 Nov 2023
Beyond Detection: Unveiling Fairness Vulnerabilities in Abusive Language Models
Yueqing Liang, Lu Cheng, Ali Payani, Kai Shu
15 Nov 2023
Unscrambling the Rectification of Adversarial Attacks Transferability across Computer Networks
Ehsan Nowroozi, Samaneh Ghelichkhani, Imran Haider, Ali Dehghantanha
AAML
26 Oct 2023
On the Proactive Generation of Unsafe Images From Text-To-Image Models Using Benign Prompts
Yixin Wu, Ning Yu, Michael Backes, Yun Shen, Yang Zhang
DiffM
25 Oct 2023
RAI4IoE: Responsible AI for Enabling the Internet of Energy
Minhui Xue, Surya Nepal, Ling Liu, Subbu Sethuvenkatraman, Xingliang Yuan, Carsten Rudolph, Ruoxi Sun, Greg Eisenhauer
20 Sep 2023
Efficient Query-Based Attack against ML-Based Android Malware Detection under Zero Knowledge Setting
Ping He, Yifan Xia, Xuhong Zhang, Shouling Ji
AAML
05 Sep 2023
Dropout Attacks
Andrew Yuan, Alina Oprea, Cheng Tan
04 Sep 2023
Systematic Testing of the Data-Poisoning Robustness of KNN
Yannan Li, Jingbo Wang, Chao Wang
AAML, OOD
17 Jul 2023
Your Attack Is Too DUMB: Formalizing Attacker Scenarios for Adversarial Transferability
Marco Alecci, Mauro Conti, Francesco Marchiori, L. Martinelli, Luca Pajola
AAML
27 Jun 2023
Query-Free Evasion Attacks Against Machine Learning-Based Malware Detectors with Generative Adversarial Networks
Daniel Gibert, Jordi Planes, Quan Le, Giulio Zizzo
AAML
16 Jun 2023
Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations
T. Krauß, Alexandra Dmitrienko
AAML
06 Jun 2023
Enhancing Vulnerability Prioritization: Data-Driven Exploit Predictions with Community-Driven Insights
Jay Jacobs, Sasha Romanosky, Octavian Suciu, Benjamin Edwards, Armin Sarabi
27 Feb 2023
PAD: Towards Principled Adversarial Malware Detection Against Evasion Attacks
Deqiang Li, Shicheng Cui, Yun Li, Jia Xu, Fu Xiao, Shouhuai Xu
AAML
22 Feb 2023
Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models
Shawn Shan, Jenna Cryan, Emily Wenger, Haitao Zheng, Rana Hanocka, Ben Y. Zhao
WIGM
08 Feb 2023
XMAM: X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning
Jianyi Zhang, Fangjiao Zhang, Qichao Jin, Zhiqiang Wang, Xiaodong Lin, X. Hei
AAML, FedML
28 Dec 2022
Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks
Jimmy Z. Di, Jack Douglas, Jayadev Acharya, Gautam Kamath, Ayush Sekhari
MU
21 Dec 2022
Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong
SSL
06 Dec 2022
Resilience of Wireless Ad Hoc Federated Learning against Model Poisoning Attacks
Naoya Tezuka, H. Ochiai, Yuwei Sun, Hiroshi Esaki
AAML
07 Nov 2022
Emerging Threats in Deep Learning-Based Autonomous Driving: A Comprehensive Survey
Huiyun Cao, Wenlong Zou, Yinkun Wang, Ting Song, Mengjun Liu
AAML
19 Oct 2022
Characterizing Internal Evasion Attacks in Federated Learning
Taejin Kim, Shubhranshu Singh, Nikhil Madaan, Carlee Joe-Wong
FedML
17 Sep 2022
Federated Learning based on Defending Against Data Poisoning Attacks in IoT
Jiayin Li, Wenzhong Guo, Xingshuo Han, Jianping Cai, Ximeng Liu
AAML
14 Sep 2022
SNAP: Efficient Extraction of Private Properties with Poisoning
Harsh Chaudhari, John Abascal, Alina Oprea, Matthew Jagielski, Florian Tramèr, Jonathan R. Ullman
MIACV
25 Aug 2022
Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications
Ali Raza, Shujun Li, K. Tran, L. Koehl, Kim Duc Tran
AAML
18 Jul 2022
SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning
Harsh Chaudhari, Matthew Jagielski, Alina Oprea
20 May 2022
PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
13 May 2022
VPN: Verification of Poisoning in Neural Networks
Youcheng Sun, Muhammad Usman, D. Gopinath, C. Păsăreanu
AAML
08 May 2022
Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Minh Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini
MIACV
31 Mar 2022
Robustly-reliable learners under poisoning attacks
Maria-Florina Balcan, Avrim Blum, Steve Hanneke, Dravyansh Sharma
AAML, OOD
08 Mar 2022
Adversarial Patterns: Building Robust Android Malware Classifiers
Dipkamal Bhusal, Nidhi Rastogi
AAML
04 Mar 2022
Trusted AI in Multi-agent Systems: An Overview of Privacy and Security for Distributed Learning
Chuan Ma, Jun Li, Kang Wei, Bo Liu, Ming Ding, Long Yuan, Zhu Han, H. Vincent Poor
18 Feb 2022
StratDef: Strategic Defense Against Adversarial Attacks in ML-based Malware Detection
Aqib Rashid, Jose Such
AAML
15 Feb 2022
Privacy protection based on mask template
Hao Wang, Yunkun Bai, Guangmin Sun, Jie Liu
PICV
13 Feb 2022
Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers
Limin Yang, Zhi Chen, Jacopo Cortellazzi, Feargus Pendlebury, Kevin Tu, Fabio Pierazzi, Lorenzo Cavallaro, Gang Wang
AAML
11 Feb 2022
Redactor: A Data-centric and Individualized Defense Against Inference Attacks
Geon Heo, Steven Euijong Whang
AAML
07 Feb 2022