arXiv: 2406.03409
Robust Knowledge Distillation Based on Feature Variance Against Backdoored Teacher Model
1 June 2024
Jinyin Chen, Xiaoming Zhao, Haibin Zheng, Xiao Li, Sheng Xiang, Haifeng Guo
AAML
Papers citing "Robust Knowledge Distillation Based on Feature Variance Against Backdoored Teacher Model"
Cascading Adversarial Bias from Injection to Distillation in Language Models
Harsh Chaudhari, Jamie Hayes, Matthew Jagielski, Ilia Shumailov, Milad Nasr, Alina Oprea
AAML
30 May 2025