Abstractors and relational cross-attention: An inductive bias for explicit relational reasoning in Transformers
Awni Altabaa, Taylor Webb, Jonathan D. Cohen, John Lafferty
arXiv:2304.00195 | 1 April 2023

Papers citing "Abstractors and relational cross-attention: An inductive bias for explicit relational reasoning in Transformers" (7 of 7 papers shown)

Convolutional Neural Networks Can (Meta-)Learn the Same-Different Relation
Max Gupta, Sunayana Rane, R. Thomas McCoy, Thomas L. Griffiths
Tags: SSL, OOD, DRL | 29 Mar 2025

Emergent Symbolic Mechanisms Support Abstract Reasoning in Large Language Models
Yukang Yang, Declan Campbell, Kaixuan Huang, Mengdi Wang, Jonathan D. Cohen, Taylor Webb
Tags: LRM | 27 Feb 2025

Disentangling and Integrating Relational and Sensory Information in Transformer Architectures
Awni Altabaa, John Lafferty
26 May 2024

Approximation of relation functions and attention mechanisms
Awni Altabaa, John Lafferty
13 Feb 2024

Learning Hierarchical Relational Representations through Relational Convolutions
Awni Altabaa, John Lafferty
05 Oct 2023

The Relational Bottleneck as an Inductive Bias for Efficient Abstraction
Taylor Webb, Steven M. Frankland, Awni Altabaa, Simon N. Segert, Kamesh Krishnamurthy, ..., Tyler Giallanza, Zack Dulberg, Randall O'Reilly, John Lafferty, Jonathan D. Cohen
12 Sep 2023

Emergent Symbols through Binding in External Memory
Taylor Webb, I. Sinha, J. Cohen
29 Dec 2020