ResearchTrend.AI
Transformers as Soft Reasoners over Language
Peter Clark, Oyvind Tafjord, Kyle Richardson
arXiv:2002.05867 · 14 February 2020
ReLM, OffRL, LRM

Papers citing "Transformers as Soft Reasoners over Language"

30 of 80 papers shown
Language models show human-like content effects on reasoning tasks
Ishita Dasgupta, Andrew Kyle Lampinen, Stephanie C. Y. Chan, Hannah R. Sheahan, Antonia Creswell, D. Kumaran, James L. McClelland, Felix Hill
ReLM, LRM · 14 Jul 2022
Exploring Length Generalization in Large Language Models
Cem Anil, Yuhuai Wu, Anders Andreassen, Aitor Lewkowycz, Vedant Misra, V. Ramasesh, Ambrose Slone, Guy Gur-Ari, Ethan Dyer, Behnam Neyshabur
ReLM, LRM · 11 Jul 2022
Chain of Thought Imitation with Procedure Cloning
Mengjiao Yang, Dale Schuurmans, Pieter Abbeel, Ofir Nachum
OffRL · 22 May 2022
Life after BERT: What do Other Muppets Understand about Language?
Vladislav Lialin, Kevin Zhao, Namrata Shivagunde, Anna Rumshisky
21 May 2022
Feature Aggregation in Zero-Shot Cross-Lingual Transfer Using Multilingual BERT
Beiduo Chen, Wu Guo, Quan Liu, Kun Tao
17 May 2022
METGEN: A Module-Based Entailment Tree Generation Framework for Answer Explanation
Ruixin Hong, Hongming Zhang, Xintong Yu, Changshui Zhang
ReLM, LRM · 05 May 2022
Logically Consistent Adversarial Attacks for Soft Theorem Provers
Alexander Gaskell, Yishu Miao, Lucia Specia, Francesca Toni
AAML · 29 Apr 2022
FaiRR: Faithful and Robust Deductive Reasoning over Natural Language
Soumya Sanyal, Harman Singh, Xiang Ren
ReLM, LRM · 19 Mar 2022
E-KAR: A Benchmark for Rationalizing Natural Language Analogical Reasoning
Jiangjie Chen, Rui Xu, Ziquan Fu, Wei Shi, Zhongqiao Li, Xinbo Zhang, Changzhi Sun, Lei Li, Yanghua Xiao, Hao Zhou
ELM · 16 Mar 2022
Do Transformers know symbolic rules, and would we know if they did?
Tommi Gröndahl, Yu-Wen Guo, Nirmal Asokan
19 Feb 2022
Does Entity Abstraction Help Generative Transformers Reason?
Nicolas Angelard-Gontier, Siva Reddy, C. Pal
05 Jan 2022
Pushing the Limits of Rule Reasoning in Transformers through Natural Language Satisfiability
Kyle Richardson, Ashish Sabharwal
ReLM, LRM · 16 Dec 2021
Dyna-bAbI: unlocking bAbI's potential with dynamic synthetic benchmarking
Ronen Tamari, Kyle Richardson, Aviad Sar-Shalom, Noam Kahlon, Nelson F. Liu, Reut Tsarfaty, Dafna Shahaf
30 Nov 2021
Hey AI, Can You Solve Complex Tasks by Talking to Agents?
Tushar Khot, Kyle Richardson, Daniel Khashabi, Ashish Sabharwal
RALM, LRM · 16 Oct 2021
DeepA2: A Modular Framework for Deep Argument Analysis with Pretrained Neural Text2Text Language Models
Gregor Betz, Kyle Richardson
04 Oct 2021
BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief
Nora Kassner, Oyvind Tafjord, Hinrich Schütze, Peter Clark
KELM, LRM · 29 Sep 2021
Neural Unification for Logic Reasoning over Natural Language
Gabriele Picco, Hoang Thanh Lam, M. Sbodio, Vanessa Lopez Garcia
NAI, LRM · 17 Sep 2021
On the Challenges of Evaluating Compositional Explanations in Multi-Hop Inference: Relevance, Completeness, and Expert Ratings
Peter Alexander Jansen, Kelly Smith, Dan Moreno, Huitzilin Ortiz
CoGe, ReLM, LRM · 07 Sep 2021
Thinking Like Transformers
Gail Weiss, Yoav Goldberg, Eran Yahav
AI4CE · 13 Jun 2021
multiPRover: Generating Multiple Proofs for Improved Interpretability in Rule Reasoning
Swarnadeep Saha, Prateek Yadav, Joey Tianyi Zhou
ReLM, LRM · 02 Jun 2021
Relational World Knowledge Representation in Contextual Language Models: A Review
Tara Safavi, Danai Koutra
KELM · 12 Apr 2021
Thinking Aloud: Dynamic Context Generation Improves Zero-Shot Reasoning Performance of GPT-2
Gregor Betz, Kyle Richardson, Christian Voigt
ReLM, LRM · 24 Mar 2021
Can Transformers Reason About Effects of Actions?
Pratyay Banerjee, Chitta Baral, Man Luo, Arindam Mitra, Kuntal Kumar Pal, Tran Cao Son, Neeraj Varshney
LRM, AI4CE · 17 Dec 2020
Neural Databases
James Thorne, Majid Yazdani, Marzieh Saeidi, Fabrizio Silvestri, Sebastian Riedel, A. Halevy
NAI · 14 Oct 2020
PRover: Proof Generation for Interpretable Reasoning over Rules
Swarnadeep Saha, Sayan Ghosh, Shashank Srivastava, Joey Tianyi Zhou
ReLM, LRM · 06 Oct 2020
Critical Thinking for Language Models
Gregor Betz, Christian Voigt, Kyle Richardson
SyDa, ReLM, LRM, AI4CE · 15 Sep 2020
Leap-Of-Thought: Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge
Alon Talmor, Oyvind Tafjord, Peter Clark, Yoav Goldberg, Jonathan Berant
ReLM, LRM · 11 Jun 2020
RICA: Evaluating Robust Inference Capabilities Based on Commonsense Axioms
Pei Zhou, Rahul Khanna, Seyeon Lee, Bill Yuchen Lin, Daniel E. Ho, Jay Pujara, Xiang Ren
ReLM · 02 May 2020
Improving Graph Neural Network Representations of Logical Formulae with Subgraph Pooling
M. Crouse, Ibrahim Abdelaziz, Cristina Cornelio, Veronika Thost, Lingfei Wu, Kenneth D. Forbus, Achille Fokoue
NAI, AI4CE, GNN · 15 Nov 2019
A Decomposable Attention Model for Natural Language Inference
Ankur P. Parikh, Oscar Täckström, Dipanjan Das, Jakob Uszkoreit
06 Jun 2016