Symbolic Knowledge Distillation: from General Language Models to Commonsense Models
arXiv:2110.07178 · 14 October 2021
Peter West, Chandrasekhar Bhagavatula, Jack Hessel, Jena D. Hwang, Liwei Jiang, Ronan Le Bras, Ximing Lu, Sean Welleck, Yejin Choi
Tags: SyDa
Papers citing "Symbolic Knowledge Distillation: from General Language Models to Commonsense Models" (9 of 59 shown)
WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation
Alisa Liu, Swabha Swayamdipta, Noah A. Smith, Yejin Choi · 16 Jan 2022

Reframing Human-AI Collaboration for Generating Free-Text Explanations
Sarah Wiegreffe, Jack Hessel, Swabha Swayamdipta, Mark O. Riedl, Yejin Choi · 16 Dec 2021

Refined Commonsense Knowledge from Large-Scale Web Contents
Tuan-Phong Nguyen, Simon Razniewski, Julien Romero, G. Weikum · 30 Nov 2021

Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs
Peter Hase, Mona T. Diab, Asli Celikyilmaz, Xian Li, Zornitsa Kozareva, Veselin Stoyanov, Mohit Bansal, Srini Iyer · 26 Nov 2021
Tags: KELM, LRM

Guided Generation of Cause and Effect
Zhongyang Li, Xiao Ding, Ting Liu, J. E. Hu, Benjamin Van Durme · 21 Jul 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant · 18 Apr 2021
Tags: VPVLM

Data Augmentation using Pre-trained Transformer Models
Varun Kumar, Ashutosh Choudhary, Eunah Cho · 04 Mar 2020
Tags: VLM

Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel · 03 Sep 2019
Tags: KELM, AI4MH

Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets
Mor Geva, Yoav Goldberg, Jonathan Berant · 21 Aug 2019