ReaSCAN: Compositional Reasoning in Language Grounding

18 September 2021
Zhengxuan Wu
Elisa Kreiss
Desmond C. Ong
Christopher Potts
CoGe, LRM

Papers citing "ReaSCAN: Compositional Reasoning in Language Grounding"

10 of 10 papers shown
Natural Language Satisfiability: Exploring the Problem Distribution and Evaluating Transformer-based Language Models
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Tharindu Madusanka
Ian Pratt-Hartmann
Riza Batista-Navarro
LRM
23 Aug 2025
LLM-A*: Large Language Model Enhanced Incremental Heuristic Search on Path Planning
Silin Meng
Yiwei Wang
Cheng-Fu Yang
Nanyun Peng
Kai-Wei Chang
20 Jun 2024
Can LLM find the green circle? Investigation and Human-guided tool manipulation for compositional generalization
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023
Min Zhang
Jianfeng He
Shuo Lei
Murong Yue
Linhan Wang
Chang-Tien Lu
12 Dec 2023
Is Feedback All You Need? Leveraging Natural Language Feedback in Goal-Conditioned Reinforcement Learning
Sabrina McCallum
Max Taylor-Davies
Stefano V. Albrecht
Alessandro Suglia
07 Dec 2023
When Can Transformers Ground and Compose: Insights from Compositional Generalization Benchmarks
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Ankur Sikarwar
Arkil Patel
Navin Goyal
ViT
23 Oct 2022
Trust in Language Grounding: a new AI challenge for human-robot teams
David M. Bossens
C. Evers
05 Sep 2022
Pushing the Limits of Rule Reasoning in Transformers through Natural Language Satisfiability
Kyle Richardson
Ashish Sabharwal
ReLM, LRM
16 Dec 2021
Inducing Causal Structure for Interpretable Neural Networks
Atticus Geiger
Zhengxuan Wu
Hanson Lu
J. Rozner
Elisa Kreiss
Thomas Icard
Noah D. Goodman
Christopher Potts
CML, OOD
01 Dec 2021
Dyna-bAbI: unlocking bAbI's potential with dynamic synthetic benchmarking
Ronen Tamari
Kyle Richardson
Aviad Sar-Shalom
Noam Kahlon
Nelson F. Liu
Reut Tsarfaty
Dafna Shahaf
30 Nov 2021
Relational reasoning and generalization using non-symbolic neural networks
Annual Meeting of the Cognitive Science Society (CogSci), 2020
Atticus Geiger
Alexandra Carstensen
Michael C. Frank
Christopher Potts
NAI
14 Jun 2020