
Adversarial Self-Supervised Data-Free Distillation for Text Classification

Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020
10 October 2020
Xinyin Ma, Yongliang Shen, Gongfan Fang, Chen Chen, Chenghao Jia, Weiming Lu
arXiv:2010.04883 (abs · PDF · HTML)

Papers citing "Adversarial Self-Supervised Data-Free Distillation for Text Classification"

13 / 13 papers shown
Prompt Tuning for Few-Shot Continual Learning Named Entity Recognition
Zhe Ren
10 Aug 2025

Encapsulating Knowledge in One Prompt
Qi Li, Runpeng Yu, Xinchao Wang
16 Jul 2024

Self-Regulated Data-Free Knowledge Amalgamation for Text Classification
Prashanth Vijayaraghavan, Hongzhi Wang, Luyao Shi, Tyler Baldwin, David Beymer, Ehsan Degan
16 Jun 2024

Data-Free Distillation of Language Model by Text-to-Text Transfer
Zheyuan Bai, Xinduo Liu, Hailin Hu, Tianyu Guo, Qinghua Zhang, Yunhe Wang
03 Nov 2023

LLM-Pruner: On the Structural Pruning of Large Language Models
Neural Information Processing Systems (NeurIPS), 2023
Xinyin Ma, Gongfan Fang, Xinchao Wang
19 May 2023

When Gradient Descent Meets Derivative-Free Optimization: A Match Made in Black-Box Scenario
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Chengcheng Han, Liqing Cui, Renyu Zhu, Jiadong Wang, Polydoros Giannouris, Qiushi Sun, Xiang Li, Ming Gao
17 May 2023

Feature-Rich Audio Model Inversion for Data-Free Knowledge Distillation Towards General Sound Classification
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023
Zuheng Kang, Yayun He, Jianzong Wang, Junqing Peng, Xiaoyang Qu, Jing Xiao
14 Mar 2023

Prompting to Distill: Boosting Data-Free Knowledge Distillation via Reinforced Prompt
International Joint Conference on Artificial Intelligence (IJCAI), 2022
Xinyin Ma, Xinchao Wang, Gongfan Fang, Yongliang Shen, Weiming Lu
16 May 2022

Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay
AAAI Conference on Artificial Intelligence (AAAI), 2022
Kuluhan Binici, Shivam Aggarwal, N. Pham, K. Leman, T. Mitra
09 Jan 2022

Data-Free Knowledge Transfer: A Survey
Yuang Liu, Wei Zhang, Jun Wang, Jianyong Wang
31 Dec 2021

Up to 100× Faster Data-free Knowledge Distillation
Gongfan Fang, Kanya Mo, Xinchao Wang, Mingli Song, Shitao Bei, Haofei Zhang, Xiuming Zhang
12 Dec 2021

Can depth-adaptive BERT perform better on binary classification tasks
Jing Fan, Xin Zhang, Sheng Zhang, Yan Pan, Lixiang Guo
22 Nov 2021

Contrastive Model Inversion for Data-Free Knowledge Distillation
Gongfan Fang, Mingli Song, Xinchao Wang, Chen Shen, Xingen Wang, Xiuming Zhang
18 May 2021