The Relational Bottleneck as an Inductive Bias for Efficient Abstraction

12 September 2023
Taylor W. Webb
Steven M. Frankland
Awni Altabaa
Simon N. Segert
Kamesh Krishnamurthy
Declan Campbell
Jacob Russin
Tyler Giallanza
Zack Dulberg
Randall O'Reilly
John Lafferty
Jonathan D. Cohen
arXiv:2309.06629
Abstract

A central challenge for cognitive science is to explain how abstract concepts are acquired from limited experience. This has often been framed in terms of a dichotomy between connectionist and symbolic cognitive models. Here, we highlight a recently emerging line of work that suggests a novel reconciliation of these approaches, by exploiting an inductive bias that we term the relational bottleneck. In that approach, neural networks are constrained via their architecture to focus on relations between perceptual inputs, rather than the attributes of individual inputs. We review a family of models that employ this approach to induce abstractions in a data-efficient manner, emphasizing their potential as candidate models for the acquisition of abstract concepts in the human mind and brain.
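As a rough illustration of the architectural constraint the abstract describes, the sketch below encodes each input object independently and then passes only the matrix of pairwise inner products (the relations) to the downstream decision module, so the object embeddings themselves never reach the output head. This is a minimal, hypothetical PyTorch example for intuition only; the class name, dimensions, and task setup are assumptions and do not reproduce any specific model from the paper.

```python
import torch
import torch.nn as nn


class RelationalBottleneckClassifier(nn.Module):
    """Minimal sketch of a relational-bottleneck architecture.

    Each object is encoded separately, then ONLY the n_objects x n_objects
    matrix of pairwise inner products (relations) is passed downstream;
    the individual object embeddings are discarded at the bottleneck.
    """

    def __init__(self, in_dim: int, embed_dim: int, n_objects: int, n_classes: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )
        # The decision head sees only the flattened relation matrix.
        self.head = nn.Sequential(
            nn.Linear(n_objects * n_objects, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, objects: torch.Tensor) -> torch.Tensor:
        # objects: (batch, n_objects, in_dim)
        z = self.encoder(objects)                    # (batch, n_objects, embed_dim)
        relations = torch.bmm(z, z.transpose(1, 2))  # pairwise inner products
        return self.head(relations.flatten(start_dim=1))


# Hypothetical usage: 4 objects per trial, a 2-way (e.g. same/different) decision.
model = RelationalBottleneckClassifier(in_dim=16, embed_dim=32, n_objects=4, n_classes=2)
logits = model(torch.randn(8, 4, 16))
print(logits.shape)  # torch.Size([8, 2])
```

Because the head receives only relations between inputs, any abstraction it learns cannot depend on the attributes of individual objects, which is the inductive bias the abstract credits with data-efficient acquisition of abstract concepts.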
