Robust Knowledge Distillation Based on Feature Variance Against Backdoored Teacher Model (arXiv 2406.03409)

1 June 2024
Jinyin Chen
Xiaoming Zhao
Haibin Zheng
Xiao Li
Sheng Xiang
Haifeng Guo
AAML

Papers citing "Robust Knowledge Distillation Based on Feature Variance Against Backdoored Teacher Model"

1 / 1 papers shown
Cascading Adversarial Bias from Injection to Distillation in Language Models
Harsh Chaudhari
Jamie Hayes
Matthew Jagielski
Ilia Shumailov
Milad Nasr
Alina Oprea
AAML
30 May 2025