Prompting to Distill: Boosting Data-Free Knowledge Distillation via Reinforced Prompt

16 May 2022
Xinyin Ma, Xinchao Wang, Gongfan Fang, Yongliang Shen, Weiming Lu

Papers citing "Prompting to Distill: Boosting Data-Free Knowledge Distillation via Reinforced Prompt"

9 / 9 papers shown

Self-Regulated Data-Free Knowledge Amalgamation for Text Classification
Prashanth Vijayaraghavan, Hongzhi Wang, Luyao Shi, Tyler Baldwin, David Beymer, Ehsan Degan
16 Jun 2024

Towards Trustworthy AI: A Review of Ethical and Robust Large Language Models
Meftahul Ferdaus, Mahdi Abdelguerfi, Elias Ioup, Kendall N. Niles, Ken Pathak, Steve Sloan
01 Jun 2024

Trans-LoRA: towards data-free Transferable Parameter Efficient Finetuning
Runqian Wang, Soumya Ghosh, David D. Cox, Diego Antognini, Aude Oliva, Rogerio Feris, Leonid Karlinsky
27 May 2024

PromptKD: Distilling Student-Friendly Knowledge for Generative Language Models via Prompt Tuning
Gyeongman Kim, Doohyuk Jang, Eunho Yang
Tags: VLM
20 Feb 2024

IAG: Induction-Augmented Generation Framework for Answering Reasoning Questions
Zhebin Zhang, Xinyu Zhang, Yuanhang Ren, Saijiang Shi, Meng Han, Yongkang Wu, Ruofei Lai, Zhao Cao
Tags: RALM, LRM
30 Nov 2023

Data-Free Distillation of Language Model by Text-to-Text Transfer
Zheyuan Bai, Xinduo Liu, Hailin Hu, Tianyu Guo, Qinghua Zhang, Yunhe Wang
03 Nov 2023

Distilling Robustness into Natural Language Inference Models with Domain-Targeted Augmentation
Joe Stacey, Marek Rei
22 May 2023

LLM-Pruner: On the Structural Pruning of Large Language Models
Xinyin Ma, Gongfan Fang, Xinchao Wang
19 May 2023

Fake it till you make it: Learning transferable representations from synthetic ImageNet clones
Mert Bulent Sariyildiz, Alahari Karteek, Diane Larlus, Yannis Kalantidis
Tags: DiffM, VLM
16 Dec 2022