Adversarial Machine Learning -- Industry Perspectives

4 February 2020
Ramnath Kumar, Magnus Nyström, J. Lambert, Andrew Marshall, Mario Goertzel, Andi Comissoneru, Matt Swann, Sharon Xia

AAML, SILM

Papers citing "Adversarial Machine Learning -- Industry Perspectives"

Showing 29 of 129 citing papers.
Balancing detectability and performance of attacks on the control channel of Markov Decision Processes
Alessio Russo, Alexandre Proutiere
AAML
15 Sep 2021

Understanding the Logit Distributions of Adversarially-Trained Deep Neural Networks
Landan Seguin, A. Ndirango, Neeli Mishra, SueYeon Chung, Tyler Lee
OOD
26 Aug 2021

Evaluating the Cybersecurity Risk of Real World, Machine Learning Production Systems
Ron Bitton, Nadav Maman, Inderjeet Singh, Satoru Momiyama, Yuval Elovici, A. Shabtai
05 Jul 2021

Accumulative Poisoning Attacks on Real-time Data
Neural Information Processing Systems (NeurIPS), 2021
Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu
18 Jun 2021

Modeling Realistic Adversarial Attacks against Network Intrusion Detection Systems
Giovanni Apruzzese, M. Andreolini, Luca Ferretti, Mirco Marchetti, M. Colajanni
AAML
17 Jun 2021

Reliable Adversarial Distillation with Unreliable Teachers
International Conference on Learning Representations (ICLR), 2021
Jianing Zhu, Jiangchao Yao, Bo Han, Jingfeng Zhang, Tongliang Liu, Gang Niu, Jingren Zhou, Jianliang Xu, Hongxia Yang
AAML
09 Jun 2021

The Duo of Artificial Intelligence and Big Data for Industry 4.0: Review of Applications, Techniques, Challenges, and Future Research Directions
Senthil Kumar Jagatheesaperumal, Mohamed Rahouti, Kashif Ahmad, Ala I. Al-Fuqaha, Mohsen Guizani
AI4CE
06 Apr 2021

PointBA: Towards Backdoor Attacks in 3D Point Cloud
IEEE International Conference on Computer Vision (ICCV), 2021
Xinke Li, Zhirui Chen, Yue Zhao, Zekun Tong, Yabang Zhao, A. Lim, Qiufeng Wang
3DPC, AAML
30 Mar 2021

Black-box Detection of Backdoor Attacks with Limited Information and Data
IEEE International Conference on Computer Vision (ICCV), 2021
Yinpeng Dong, Xiao Yang, Zhijie Deng, Tianyu Pang, Zihao Xiao, Hang Su, Jun Zhu
AAML
24 Mar 2021

The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?
IEEE International Joint Conference on Neural Networks (IJCNN), 2021
Antonio Emanuele Cinà, Sebastiano Vascon, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
AAML
23 Mar 2021

What Doesn't Kill You Makes You Robust(er): How to Adversarially Train against Data Poisoning
Jonas Geiping, Liam H. Fowl, Gowthami Somepalli, Micah Goldblum, Michael Moeller, Tom Goldstein
TDI, AAML, SILM
26 Feb 2021

Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training
Neural Information Processing Systems (NeurIPS), 2021
Lue Tao, Lei Feng, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen
AAML
09 Feb 2021

Adversarial Machine Learning Attacks on Condition-Based Maintenance Capabilities
Hamidreza Habibollahi Najaf Abadi
AAML
28 Jan 2021

On managing vulnerabilities in AI/ML systems
New Security Paradigms Workshop (NSPW), 2020
Jonathan M. Spring, April Galyardt, A. Householder, Nathan M. VanHoudnos
22 Jan 2021

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Basel Alomair, Aleksander Madry, Yue Liu, Tom Goldstein
SILM
18 Dec 2020

TrollHunter [Evader]: Automated Detection [Evasion] of Twitter Trolls During the COVID-19 Pandemic
New Security Paradigms Workshop (NSPW), 2020
Peter Jachim, Filipo Sharevski, Paige Treebridge
04 Dec 2020

Effect of backdoor attacks over the complexity of the latent space distribution
Henry Chacón, P. Rad
AAML
29 Nov 2020

Transdisciplinary AI Observatory -- Retrospective Analyses and Future-Oriented Contradistinctions
Nadisha-Marie Aliman, L. Kester, Roman V. Yampolskiy
26 Nov 2020

Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020
Eitan Borgnia, Valeriia Cherepanova, Liam H. Fowl, Amin Ghiasi, Jonas Geiping, Micah Goldblum, Tom Goldstein, Arjun Gupta
AAML
18 Nov 2020

Challenges in Deploying Machine Learning: a Survey of Case Studies
ACM Computing Surveys (ACM CSUR), 2020
Andrei Paleyes, Raoul-Gabriel Urma, Neil D. Lawrence
18 Nov 2020

Concealed Data Poisoning Attacks on NLP Models
Eric Wallace, Tony Zhao, Shi Feng, Sameer Singh
SILM
23 Oct 2020

VenoMave: Targeted Poisoning Against Speech Recognition
H. Aghakhani, Lea Schonherr, Thorsten Eisenhofer, D. Kolossa, Thorsten Holz, Christopher Kruegel, Giovanni Vigna
AAML
21 Oct 2020

Malicious Network Traffic Detection via Deep Learning: An Information Theoretic View
Erick Galinkin
AAML
16 Sep 2020

Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
International Conference on Learning Representations (ICLR), 2020
Jonas Geiping, Liam H. Fowl, Wenjie Huang, W. Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein
AAML
04 Sep 2020

Green Lighting ML: Confidentiality, Integrity, and Availability of Machine Learning Systems in Deployment
Abhishek Gupta, Erick Galinkin
09 Jul 2020

Backdoor Attacks Against Deep Learning Systems in the Physical World
Emily Wenger, Josephine Passananti, A. Bhagoji, Yuanshun Yao, Haitao Zheng, Ben Y. Zhao
AAML
25 Jun 2020

Subpopulation Data Poisoning Attacks
Conference on Computer and Communications Security (CCS), 2020
Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea
AAML, SILM
24 Jun 2020

Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
International Conference on Machine Learning (ICML), 2020
Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P. Dickerson, Tom Goldstein
AAML, TDI
22 Jun 2020

A Separation Result Between Data-oblivious and Data-aware Poisoning Attacks
Neural Information Processing Systems (NeurIPS), 2020
Samuel Deng, Sanjam Garg, S. Jha, Saeed Mahloujifar, Mohammad Mahmoody, Abhradeep Thakurta
26 Mar 2020