Diverse Knowledge Distillation (DKD): A Solution for Improving The Robustness of Ensemble Models Against Adversarial Attacks

26 June 2020
Ali Mirzaeian, Jana Kosecka, Houman Homayoun, Tinoosh Mohsenin, Avesta Sasan
FedML · AAML

Papers citing "Diverse Knowledge Distillation (DKD): A Solution for Improving The Robustness of Ensemble Models Against Adversarial Attacks"

2 / 2 papers shown
1. On the Convergence and Robustness of Adversarial Training
   Yisen Wang, Xingjun Ma, James Bailey, Jinfeng Yi, Bowen Zhou, Quanquan Gu
   AAML · 194 · 345 · 0 · 15 Dec 2021
2. Adversarial Machine Learning at Scale
   Alexey Kurakin, Ian Goodfellow, Samy Bengio
   AAML · 261 · 3,109 · 0 · 04 Nov 2016