
Generation-Distillation for Efficient Natural Language Understanding in Low-Data Settings

Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019
25 January 2020
Luke Melas-Kyriazi, George Han, Celine Liang
arXiv: 2002.00733

Papers citing "Generation-Distillation for Efficient Natural Language Understanding in Low-Data Settings"

7 / 7 papers shown
Minimizing PLM-Based Few-Shot Intent Detectors
Haode Zhang, Xiao-Ming Wu, Albert Y. S. Lam
13 Jul 2024
Self-Regulated Data-Free Knowledge Amalgamation for Text Classification
Prashanth Vijayaraghavan, Hongzhi Wang, Luyao Shi, Tyler Baldwin, David Beymer, Ehsan Degan
16 Jun 2024
DistillCSE: Distilled Contrastive Learning for Sentence Embeddings
Jiahao Xu, Wei Shao, Lihui Chen, Lemao Liu
20 Oct 2023
Recent Advances in Neural Text Generation: A Task-Agnostic Survey
Chen Tang, Frank Guerin, Chenghua Lin
06 Mar 2022
ReMeDi: Resources for Multi-domain, Multi-service, Medical Dialogues
Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), 2021
Guojun Yan, Jiahuan Pei, Sudipta Singha Roy, Zhaochun Ren, Xin Xin, Huasheng Liang, Maarten de Rijke, Zhumin Chen
01 Sep 2021
Optimal Size-Performance Tradeoffs: Weighing PoS Tagger Models
Magnus Jacobsen, Mikkel H. Sorensen, Leon Derczynski
16 Apr 2021
Adversarial Self-Supervised Data-Free Distillation for Text Classification
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020
Xinyin Ma, Yongliang Shen, Gongfan Fang, Chen Chen, Chenghao Jia, Weiming Lu
10 Oct 2020