KnowDA: All-in-One Knowledge Mixture Model for Data Augmentation in Low-Resource NLP
arXiv: 2206.10265 · 21 June 2022
Yufei Wang, Jiayi Zheng, Can Xu, Xiubo Geng, Tao Shen, Chongyang Tao, Daxin Jiang
Tags: VLM, MoE
Papers citing "KnowDA: All-in-One Knowledge Mixture Model for Data Augmentation in Low-Resource NLP" (5 / 5 papers shown)
1. Multitask Prompted Training Enables Zero-Shot Task Generalization (15 Oct 2021)
   Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush
   LRM · 213 · 1,656 · 0

2. CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP (18 Apr 2021)
   Qinyuan Ye, Bill Yuchen Lin, Xiang Ren
   209 · 179 · 0

3. Explainable Automated Fact-Checking for Public Health Claims (19 Oct 2020)
   Neema Kotonya, Francesca Toni
   216 · 249 · 0

4. Data Augmentation using Pre-trained Transformer Models (04 Mar 2020)
   Varun Kumar, Ashutosh Choudhary, Eunah Cho
   VLM · 214 · 315 · 0

5. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding (20 Apr 2018)
   Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
   ELM · 297 · 6,950 · 0