Diverse Knowledge Distillation (DKD): A Solution for Improving The Robustness of Ensemble Models Against Adversarial Attacks
arXiv 2006.15127 · 26 June 2020
Ali Mirzaeian, Jana Kosecka, Houman Homayoun, Tinoosh Mohsenin, Avesta Sasan
FedML · AAML
Papers citing "Diverse Knowledge Distillation (DKD): A Solution for Improving The Robustness of Ensemble Models Against Adversarial Attacks" (2 papers)
On the Convergence and Robustness of Adversarial Training
Yisen Wang, Xingjun Ma, James Bailey, Jinfeng Yi, Bowen Zhou, Quanquan Gu
AAML · 15 Dec 2021

Adversarial Machine Learning at Scale
Alexey Kurakin, Ian Goodfellow, Samy Bengio
AAML · 04 Nov 2016