Safe Distillation Box (arXiv 2112.03695)
5 December 2021
Authors: Jingwen Ye, Yining Mao, Jie Song, Xinchao Wang, Cheng Jin, Mingli Song
Tags: AAML
Papers citing "Safe Distillation Box" (10 of 10 papers shown):
1. AI Safety in Generative AI Large Language Models: A Survey
   Jaymari Chua, Yun Yvonna Li, Shiyi Yang, Chen Wang, Lina Yao. Tags: LM&MA. 06 Jul 2024

2. Ungeneralizable Examples
   Jing Ye, Xinchao Wang. 22 Apr 2024

3. Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey
   Yichen Wan, Youyang Qu, Wei Ni, Yong Xiang, Longxiang Gao, Ekram Hossain. Tags: AAML. 14 Dec 2023

4. Deep Graph Reprogramming
   Yongcheng Jing, Chongbin Yuan, Li Ju, Yiding Yang, Xinchao Wang, Dacheng Tao. 28 Apr 2023

5. Partial Network Cloning
   Jingwen Ye, Songhua Liu, Xinchao Wang. Tags: CLL. 19 Mar 2023

6. Learning to Retain while Acquiring: Combating Distribution-Shift in Adversarial Data-Free Knowledge Distillation
   Gaurav Patel, Konda Reddy Mopuri, Qiang Qiu. 28 Feb 2023

7. TinyMIM: An Empirical Study of Distilling MIM Pre-trained Models
   Sucheng Ren, Fangyun Wei, Zheng-Wei Zhang, Han Hu. 03 Jan 2023

8. Learning with Recoverable Forgetting
   Jingwen Ye, Yifang Fu, Jie Song, Xingyi Yang, Songhua Liu, Xin Jin, Mingli Song, Xinchao Wang. Tags: CLL, KELM. 17 Jul 2022

9. Distilling Knowledge from Graph Convolutional Networks
   Yiding Yang, Jiayan Qiu, Mingli Song, Dacheng Tao, Xinchao Wang. 23 Mar 2020

10. Clean-Label Backdoor Attacks on Video Recognition Models
    Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen, Yu-Gang Jiang. Tags: AAML. 06 Mar 2020