ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv: 2205.01128
Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems
2 May 2022
P. Smolensky
R. Thomas McCoy
Roland Fernandez
Matthew A. Goldrick
Jianfeng Gao

Papers citing "Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems"

9 / 9 papers shown
Human-like conceptual representations emerge from language prediction
  Ningyu Xu, Qi Zhang, Chao Du, Qiang Luo, Xipeng Qiu, Xuanjing Huang, Menghan Zhang
  21 Jan 2025
From Frege to chatGPT: Compositionality in language, cognition, and deep neural networks
  Jacob Russin, Sam Whitman McGrath, Danielle J. Williams, Lotem Elber-Dorozko
  Communities: AI4CE
  24 May 2024
Neural-Logic Human-Object Interaction Detection
  Liulei Li, Jianan Wei, Wenguan Wang, Yi Yang
  16 Nov 2023
HICO-DET-SG and V-COCO-SG: New Data Splits for Evaluating the Systematic Generalization Performance of Human-Object Interaction Detection Models
  Kenta Takemoto, Moyuru Yamada, Tomotake Sasaki, H. Akima
  17 May 2023
The Debate Over Understanding in AI's Large Language Models
  Melanie Mitchell, D. Krakauer
  Communities: ELM
  14 Oct 2022
Are Representations Built from the Ground Up? An Empirical Examination of Local Composition in Language Models
  Emmy Liu, Graham Neubig
  Communities: CoGe
  07 Oct 2022
Systematic Generalization and Emergent Structures in Transformers Trained on Structured Tasks
  Yuxuan Li, James L. McClelland
  02 Oct 2022
A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part II: Applications, Cognitive Models, and Challenges
  Denis Kleyko, D. Rachkovskij, Evgeny Osipov, A. Rahim
  12 Nov 2021
Pre-trained Models for Natural Language Processing: A Survey
  Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
  Communities: LM&MA, VLM
  18 Mar 2020