ResearchTrend.AI
Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks

19 March 2018
Octavian Suciu, R. Marginean, Yigitcan Kaya, Hal Daumé, Tudor Dumitras
AAML · ArXiv / PDF / HTML

Papers citing "Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks"

Showing 50 of 132 citing papers.
A Survey on Poisoning Attacks Against Supervised Machine Learning
Wenjun Qiu
AAML · 05 Feb 2022

How to Backdoor HyperNetwork in Personalized Federated Learning?
Phung Lai, Nhathai Phan, Issa M. Khalil, Abdallah Khreishah, Xintao Wu
AAML, FedML · 18 Jan 2022

Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges
Huaming Chen, Muhammad Ali Babar
AAML · 12 Jan 2022

LoMar: A Local Defense Against Poisoning Attack on Federated Learning
Xingyu Li, Zhe Qu, Shangqing Zhao, Bo Tang, Zhuo Lu, Yao-Hong Liu
AAML · 08 Jan 2022

SoK: A Study of the Security on Voice Processing Systems
Robert Chang, Logan Kuo, Arthur Liu, Nader Sehatbakhsh
24 Dec 2021
Mate! Are You Really Aware? An Explainability-Guided Testing Framework for Robustness of Malware Detectors
Ruoxi Sun, Minhui Xue, Gareth Tyson, Tian Dong, Shaofeng Li, Shuo Wang, Haojin Zhu, S. Çamtepe, Surya Nepal
AAML · 19 Nov 2021

An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences
Wei Guo, B. Tondi, Mauro Barni
AAML · 16 Nov 2021

Get a Model! Model Hijacking Attack Against Machine Learning Models
A. Salem, Michael Backes, Yang Zhang
AAML · 08 Nov 2021

10 Security and Privacy Problems in Large Foundation Models
Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong
28 Oct 2021

Towards Robust Reasoning over Knowledge Graphs
Zhaohan Xi, Ren Pang, Changjiang Li, S. Ji, Xiapu Luo, Xusheng Xiao, Ting Wang
27 Oct 2021
Ensemble Federated Adversarial Training with Non-IID data
Shuang Luo, Didi Zhu, Zexi Li, Chao-Xiang Wu
FedML · 26 Oct 2021

Demystifying the Transferability of Adversarial Attacks in Computer Networks
Ehsan Nowroozi, Yassine Mekdad, Mohammad Hajian Berenjestanaki, Mauro Conti, Abdeslam El Fergougui
AAML · 09 Oct 2021

Adversarial Transfer Attacks With Unknown Data and Class Overlap
Luke E. Richards, A. Nguyen, Ryan Capps, Steven D. Forsythe, Cynthia Matuszek, Edward Raff
AAML · 23 Sep 2021

SoK: Machine Learning Governance
Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot
20 Sep 2021

Avengers Ensemble! Improving Transferability of Authorship Obfuscation
Muhammad Haroon, Muhammad Fareed Zaffar, P. Srinivasan, Zubair Shafiq
AAML · 15 Sep 2021
Accumulative Poisoning Attacks on Real-time Data
Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu
18 Jun 2021

Non-Transferable Learning: A New Approach for Model Ownership Verification and Applicability Authorization
Lixu Wang, Shichao Xu, Ruiqi Xu, Xiao Wang, Qi Zhu
AAML · 13 Jun 2021

Topological Detection of Trojaned Neural Networks
Songzhu Zheng, Yikai Zhang, H. Wagner, Mayank Goswami, Chao Chen
AAML · 11 Jun 2021

Gradient-based Data Subversion Attack Against Binary Classifiers
Rosni Vasu, Sanjay Seetharaman, Shubham Malaviya, Manish Shukla, S. Lodha
AAML · 31 May 2021

Broadly Applicable Targeted Data Sample Omission Attacks
Guy Barash, E. Farchi, Sarit Kraus, Onn Shehory
AAML · 04 May 2021
Turning Federated Learning Systems Into Covert Channels
Gabriele Costa, Fabio Pinelli, S. Soderi, Gabriele Tolomei
FedML · 21 Apr 2021

EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
Yingqi Liu, Guangyu Shen, Guanhong Tao, Zhenting Wang, Shiqing Ma, X. Zhang
AAML · 16 Mar 2021

Robust learning under clean-label attack
Avrim Blum, Steve Hanneke, Jian Qian, Han Shao
OOD · 01 Mar 2021

Oriole: Thwarting Privacy against Trustworthy Deep Learning Models
Liuqiao Chen, Hu Wang, Benjamin Zi Hao Zhao, Minhui Xue, Hai-feng Qian
PICV · 23 Feb 2021

Security and Privacy for Artificial Intelligence: Opportunities and Challenges
Ayodeji Oseni, Nour Moustafa, Helge Janicke, Peng Liu, Z. Tari, A. Vasilakos
AAML · 09 Feb 2021
A Real-time Defense against Website Fingerprinting Attacks
Shawn Shan, A. Bhagoji, Haitao Zheng, Ben Y. Zhao
AAML · 08 Feb 2021

FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping
Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong
FedML · 27 Dec 2020

Poisoning Attacks on Cyber Attack Detectors for Industrial Control Systems
Moshe Kravchik, Battista Biggio, A. Shabtai
AAML · 23 Dec 2020

The Translucent Patch: A Physical and Universal Attack on Object Detectors
Alon Zolfi, Moshe Kravchik, Yuval Elovici, A. Shabtai
AAML · 23 Dec 2020

TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors
Ren Pang, Zheng-Wei Zhang, Xiangshan Gao, Zhaohan Xi, S. Ji, Peng Cheng, Xiapu Luo, Ting Wang
AAML · 16 Dec 2020
Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks
Jinyuan Jia, Yupei Liu, Xiaoyu Cao, Neil Zhenqiang Gong
AAML · 07 Dec 2020

BaFFLe: Backdoor detection via Feedback-based Federated Learning
Sébastien Andreina, G. Marson, Helen Möllering, Ghassan O. Karame
FedML · 04 Nov 2020

Blockchain based Attack Detection on Machine Learning Algorithms for IoT based E-Health Applications
Thippa Reddy Gadekallu, Manoj M K, Sivarama Krishnan S, Neeraj Kumar, S. Hakak, S. Bhattacharya
OOD · 03 Nov 2020

Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers
T. Shapira, David Berend, Ishai Rosenberg, Yang Liu, A. Shabtai, Yuval Elovici
AAML · 30 Oct 2020

Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes
Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong
AAML · 26 Oct 2020
BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models
A. Salem, Yannick Sautter, Michael Backes, Mathias Humbert, Yang Zhang
AAML, SILM, AI4CE · 06 Oct 2020

Pocket Diagnosis: Secure Federated Learning against Poisoning Attack in the Cloud
Zhuo Ma, Jianfeng Ma, Yinbin Miao, Ximeng Liu, K. Choo, R. Deng
FedML · 23 Sep 2020

Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
Jonas Geiping, Liam H. Fowl, W. R. Huang, W. Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein
AAML · 04 Sep 2020

Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection
Luca Demetrio, Scott E. Coull, Battista Biggio, Giovanni Lagorio, A. Armando, Fabio Roli
AAML · 17 Aug 2020
Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks
Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong
SILM · 11 Aug 2020

Trojaning Language Models for Fun and Profit
Xinyang Zhang, Zheng-Wei Zhang, Shouling Ji, Ting Wang
SILM, AAML · 01 Aug 2020

The Price of Tailoring the Index to Your Data: Poisoning Attacks on Learned Index Structures
Evgenios M. Kornaropoulos, Silei Ren, R. Tamassia
AAML · 01 Aug 2020

Data Poisoning Attacks Against Federated Learning Systems
Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, Ling Liu
FedML · 16 Jul 2020

Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain
Ishai Rosenberg, A. Shabtai, Yuval Elovici, L. Rokach
AAML · 05 Jul 2020
Subpopulation Data Poisoning Attacks
Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea
AAML, SILM · 24 Jun 2020

Graph Backdoor
Zhaohan Xi, Ren Pang, S. Ji, Ting Wang
AI4CE, AAML · 21 Jun 2020

On Adversarial Bias and the Robustness of Fair Machine Learning
Hong Chang, Ta Duy Nguyen, S. K. Murakonda, Ehsan Kazemi, Reza Shokri
FaML, OOD, FedML · 15 Jun 2020

Arms Race in Adversarial Malware Detection: A Survey
Deqiang Li, Qianmu Li, Yanfang Ye, Shouhuai Xu
AAML · 24 May 2020

VerifyTL: Secure and Verifiable Collaborative Transfer Learning
Zhuo Ma, Jianfeng Ma, Yinbin Miao, Ximeng Liu, Wei Zheng, K. Choo, R. Deng
AAML · 18 May 2020

Bullseye Polytope: A Scalable Clean-Label Poisoning Attack with Improved Transferability
H. Aghakhani, Dongyu Meng, Yu-Xiang Wang, Christopher Kruegel, Giovanni Vigna
AAML · 01 May 2020