Can Pretrained Language Models (Yet) Reason Deductively?
arXiv 2210.06442 · 12 October 2022
Moy Yuan, Songbo Hu, Ivan Vulić, Anna Korhonen, Zaiqiao Meng
ReLM, ELM, LRM
Papers citing "Can Pretrained Language Models (Yet) Reason Deductively?" (7 of 7 papers shown)
The Contribution of Knowledge in Visiolinguistic Learning: A Survey on Tasks and Challenges
Maria Lymperaiou, Giorgos Stamou
VLM
04 Mar 2023
Large Language Models are Zero-Shot Reasoners
Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
ReLM, LRM
24 May 2022
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
LM&Ro, LRM, AI4CE, ReLM
28 Jan 2022
NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation
Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, ..., Tianbao Xie, Usama Yaseen, Michael A. Yee, Jing Zhang, Yue Zhang
06 Dec 2021
Prix-LM: Pretraining for Multilingual Knowledge Base Construction
Wenxuan Zhou, Fangyu Liu, Ivan Vulić, Nigel Collier, Muhao Chen
KELM
16 Oct 2021
Flexible Generation of Natural Language Deductions
Kaj Bostrom, Xinyu Zhao, Swarat Chaudhuri, Greg Durrett
ReLM, LRM
18 Apr 2021
Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
KELM, AI4MH
03 Sep 2019