
arXiv:1910.09113
Discovering the Compositional Structure of Vector Representations with Role Learning Networks

21 October 2019
Paul Soulos
R. Thomas McCoy
Tal Linzen
P. Smolensky
    CoGe

Papers citing "Discovering the Compositional Structure of Vector Representations with Role Learning Networks"

34 / 34 papers shown
Identifying and Mitigating the Influence of the Prior Distribution in Large Language Models
Liyi Zhang
Veniamin Veselovsky
R. Thomas McCoy
Thomas L. Griffiths
52
0
0
17 Apr 2025
Compositional Generalization Across Distributional Shifts with Sparse Tree Operations
Paul Soulos
Henry Conklin
Mattia Opper
P. Smolensky
Jianfeng Gao
Roland Fernandez
68
4
0
18 Dec 2024
A polar coordinate system represents syntax in large language models
Pablo Diego-Simón
Stéphane d'Ascoli
Emmanuel Chemla
Yair Lakretz
J. King
LLMSV
65
0
0
07 Dec 2024
Missed Causes and Ambiguous Effects: Counterfactuals Pose Challenges for Interpreting Neural Networks
Aaron Mueller
CML
28
10
0
05 Jul 2024
From Frege to chatGPT: Compositionality in language, cognition, and deep neural networks
Jacob Russin
Sam Whitman McGrath
Danielle J. Williams
Lotem Elber-Dorozko
AI4CE
66
3
0
24 May 2024
How to use and interpret activation patching
Stefan Heimersheim
Neel Nanda
25
37
0
23 Apr 2024
AtP*: An efficient and scalable method for localizing LLM behaviour to components
János Kramár
Tom Lieberum
Rohin Shah
Neel Nanda
KELM
43
42
0
01 Mar 2024
Faithful Explanations of Black-box NLP Models Using LLM-generated Counterfactuals
Y. Gat
Nitay Calderon
Amir Feder
Alexander Chapanin
Amit Sharma
Roi Reichart
18
28
0
01 Oct 2023
Towards Best Practices of Activation Patching in Language Models: Metrics and Methods
Fred Zhang
Neel Nanda
LLMSV
28
97
0
27 Sep 2023
Causal interventions expose implicit situation models for commonsense language understanding
Takateru Yamakoshi
James L. McClelland
A. Goldberg
Robert D. Hawkins
17
6
0
06 Jun 2023
Differentiable Tree Operations Promote Compositional Generalization
Paul Soulos
J. E. Hu
Kate McCurdy
Yunmo Chen
Roland Fernandez
P. Smolensky
Jianfeng Gao
AI4CE
14
7
0
01 Jun 2023
Semantic Composition in Visually Grounded Language Models
Rohan Pandey
CoGe
16
1
0
15 May 2023
Pretrained Embeddings for E-commerce Machine Learning: When it Fails and Why?
Da Xu
Bo Yang
17
3
0
09 Apr 2023
Syntax-guided Neural Module Distillation to Probe Compositionality in Sentence Embeddings
Rohan Pandey
11
1
0
21 Jan 2023
Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality
Anuj Diwan
Layne Berry
Eunsol Choi
David F. Harwath
Kyle Mahowald
CoGe
101
41
0
01 Nov 2022
Are Representations Built from the Ground Up? An Empirical Examination of Local Composition in Language Models
Emmy Liu
Graham Neubig
CoGe
13
10
0
07 Oct 2022
Causal Proxy Models for Concept-Based Model Explanations
Zhengxuan Wu
Karel D'Oosterlinck
Atticus Geiger
Amir Zur
Christopher Potts
MILM
71
35
0
28 Sep 2022
Structural Biases for Improving Transformers on Translation into Morphologically Rich Languages
Paul Soulos
Sudha Rao
Caitlin Smith
Eric Rosen
Asli Celikyilmaz
...
Coleman Haley
Roland Fernandez
Hamid Palangi
Jianfeng Gao
P. Smolensky
14
6
0
11 Aug 2022
CEBaB: Estimating the Causal Effects of Real-World Concepts on NLP Model Behavior
Eldar David Abraham
Karel D'Oosterlinck
Amir Feder
Y. Gat
Atticus Geiger
Christopher Potts
Roi Reichart
Zhengxuan Wu
CML
28
43
0
27 May 2022
Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems
P. Smolensky
R. Thomas McCoy
Roland Fernandez
Matthew A. Goldrick
Jianfeng Gao
16
62
0
02 May 2022
Inducing Causal Structure for Interpretable Neural Networks
Atticus Geiger
Zhengxuan Wu
Hanson Lu
J. Rozner
Elisa Kreiss
Thomas F. Icard
Noah D. Goodman
Christopher Potts
CML
OOD
16
70
0
01 Dec 2021
Distributionally Robust Recurrent Decoders with Random Network Distillation
Antonio Valerio Miceli Barone
Alexandra Birch
Rico Sennrich
23
1
0
25 Oct 2021
General Cross-Architecture Distillation of Pretrained Language Models into Matrix Embeddings
Lukas Galke
Isabelle Cuber
Christophe Meyer
Henrik Ferdinand Nölscher
Angelina Sonderecker
A. Scherp
28
2
0
17 Sep 2021
Causal Abstractions of Neural Networks
Atticus Geiger
Hanson Lu
Thomas F. Icard
Christopher Potts
NAI
CML
8
216
0
06 Jun 2021
Do Syntactic Probes Probe Syntax? Experiments with Jabberwocky Probing
Rowan Hall Maudslay
Ryan Cotterell
23
33
0
04 Jun 2021
Enriching Transformers with Structured Tensor-Product Representations for Abstractive Summarization
Yichen Jiang
Asli Celikyilmaz
P. Smolensky
Paul Soulos
Sudha Rao
Hamid Palangi
Roland Fernandez
Caitlin Smith
Mohit Bansal
Jianfeng Gao
16
19
0
02 Jun 2021
Neuro-Symbolic Representations for Video Captioning: A Case for Leveraging Inductive Biases for Vision and Language
Hassan Akbari
Hamid Palangi
Jianwei Yang
Sudha Rao
Asli Celikyilmaz
Roland Fernandez
P. Smolensky
Jianfeng Gao
Shih-Fu Chang
24
3
0
18 Nov 2020
Probing Linguistic Systematicity
Emily Goodwin
Koustuv Sinha
Timothy J. O'Donnell
91
58
0
08 May 2020
Compositionality decomposed: how do neural networks generalise?
Dieuwke Hupkes
Verna Dankers
Mathijs Mul
Elia Bruni
CoGe
17
320
0
22 Aug 2019
On learning an interpreted language with recurrent models
Denis Paperno
11
4
0
11 Sep 2018
What you can cram into a single vector: Probing sentence embeddings for linguistic properties
Alexis Conneau
Germán Kruszewski
Guillaume Lample
Loïc Barrault
Marco Baroni
199
882
0
03 May 2018
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang
Amanpreet Singh
Julian Michael
Felix Hill
Omer Levy
Samuel R. Bowman
ELM
294
6,950
0
20 Apr 2018
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Yonghui Wu
M. Schuster
Z. Chen
Quoc V. Le
Mohammad Norouzi
...
Alex Rudnick
Oriol Vinyals
G. Corrado
Macduff Hughes
J. Dean
AIMat
716
6,740
0
26 Sep 2016
A Decomposable Attention Model for Natural Language Inference
Ankur P. Parikh
Oscar Täckström
Dipanjan Das
Jakob Uszkoreit
196
1,367
0
06 Jun 2016