ResearchTrend.AI
Discovering the Compositional Structure of Vector Representations with Role Learning Networks
arXiv: 1910.09113 (v3, latest)
BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackBoxNLP), 2019
21 October 2019
Paul Soulos, R. Thomas McCoy, Tal Linzen, P. Smolensky

Papers citing "Discovering the Compositional Structure of Vector Representations with Role Learning Networks"

50 / 74 papers shown
RelP: Faithful and Efficient Circuit Discovery in Language Models via Relevance Patching
F. Jafari, Oliver Eberle, Ashkan Khakzar, Neel Nanda
28 Aug 2025

Distinct Computations Emerge From Compositional Curricula in In-Context Learning
Jin Hwa Lee, Andrew Kyle Lampinen, Aaditya K. Singh, Andrew Saxe
16 Jun 2025

Identifying and Mitigating the Influence of the Prior Distribution in Large Language Models
Liyi Zhang, Veniamin Veselovsky, R. Thomas McCoy, Thomas Griffiths
17 Apr 2025

Compositional Generalization Across Distributional Shifts with Sparse Tree Operations
Neural Information Processing Systems (NeurIPS), 2024
Paul Soulos, Henry Conklin, Mattia Opper, P. Smolensky, Jianfeng Gao, Roland Fernandez
18 Dec 2024

A polar coordinate system represents syntax in large language models
Neural Information Processing Systems (NeurIPS), 2024
Pablo Diego-Simón, Stéphane d'Ascoli, Emmanuel Chemla, Yair Lakretz, J. King
07 Dec 2024

Missed Causes and Ambiguous Effects: Counterfactuals Pose Challenges for Interpreting Neural Networks
Aaron Mueller
05 Jul 2024
From Frege to chatGPT: Compositionality in language, cognition, and deep neural networks
Jacob Russin, Sam Whitman McGrath, Danielle J. Williams
24 May 2024

How to use and interpret activation patching
Stefan Heimersheim, Neel Nanda
23 Apr 2024

AtP*: An efficient and scalable method for localizing LLM behaviour to components
János Kramár, Tom Lieberum, Rohin Shah, Neel Nanda
01 Mar 2024

Faithful Explanations of Black-box NLP Models Using LLM-generated Counterfactuals
International Conference on Learning Representations (ICLR), 2023
Y. Gat, Nitay Calderon, Amir Feder, Alexander Chapanin, Amit Sharma, Roi Reichart
01 Oct 2023

Towards Best Practices of Activation Patching in Language Models: Metrics and Methods
International Conference on Learning Representations (ICLR), 2023
Fred Zhang, Neel Nanda
27 Sep 2023

Causal interventions expose implicit situation models for commonsense language understanding
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Takateru Yamakoshi, James L. McClelland, A. Goldberg, Robert D. Hawkins
06 Jun 2023
Differentiable Tree Operations Promote Compositional Generalization
International Conference on Machine Learning (ICML), 2023
Paul Soulos, J. E. Hu, Kate McCurdy, Yunmo Chen, Roland Fernandez, P. Smolensky, Jianfeng Gao
01 Jun 2023

Semantic Composition in Visually Grounded Language Models
Rohan Pandey
15 May 2023

Pretrained Embeddings for E-commerce Machine Learning: When it Fails and Why?
The Web Conference (WWW), 2023
Da Xu, Bo Yang
09 Apr 2023

Syntax-guided Neural Module Distillation to Probe Compositionality in Sentence Embeddings
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2023
Rohan Pandey
21 Jan 2023

Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Anuj Diwan, Layne Berry, Eunsol Choi, David Harwath, Kyle Mahowald
01 Nov 2022

Are Representations Built from the Ground Up? An Empirical Examination of Local Composition in Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Emmy Liu, Graham Neubig
07 Oct 2022

Causal Proxy Models for Concept-Based Model Explanations
International Conference on Machine Learning (ICML), 2022
Zhengxuan Wu, Karel D'Oosterlinck, Atticus Geiger, Amir Zur, Christopher Potts
28 Sep 2022
Structural Biases for Improving Transformers on Translation into Morphologically Rich Languages
Paul Soulos, Sudha Rao, Caitlin Smith, Eric Rosen, Asli Celikyilmaz, ..., Coleman Haley, Roland Fernandez, Hamid Palangi, Jianfeng Gao, P. Smolensky
11 Aug 2022

CEBaB: Estimating the Causal Effects of Real-World Concepts on NLP Model Behavior
Neural Information Processing Systems (NeurIPS), 2022
Eldar David Abraham, Karel D'Oosterlinck, Amir Feder, Y. Gat, Atticus Geiger, Christopher Potts, Roi Reichart, Zhengxuan Wu
27 May 2022

Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems
The AI Magazine (AI Mag.), 2022
P. Smolensky, R. Thomas McCoy, Roland Fernandez, Matthew A. Goldrick, Jia-Hao Gao
02 May 2022

Inducing Causal Structure for Interpretable Neural Networks
Atticus Geiger, Zhengxuan Wu, Hanson Lu, J. Rozner, Elisa Kreiss, Thomas Icard, Noah D. Goodman, Christopher Potts
01 Dec 2021

Distributionally Robust Recurrent Decoders with Random Network Distillation
Workshop on Representation Learning for NLP (RepL4NLP), 2021
Antonio Valerio Miceli Barone, Alexandra Birch, Rico Sennrich
25 Oct 2021
General Cross-Architecture Distillation of Pretrained Language Models into Matrix Embeddings
Lukas Galke, Isabelle Cuber, Christophe Meyer, Henrik Ferdinand Nolscher, Angelina Sonderecker, A. Scherp
17 Sep 2021

Causal Abstractions of Neural Networks
Neural Information Processing Systems (NeurIPS), 2021
Atticus Geiger, Hanson Lu, Thomas Icard, Christopher Potts
06 Jun 2021

Do Syntactic Probes Probe Syntax? Experiments with Jabberwocky Probing
North American Chapter of the Association for Computational Linguistics (NAACL), 2021
Rowan Hall Maudslay, Robert Bamler
04 Jun 2021

Enriching Transformers with Structured Tensor-Product Representations for Abstractive Summarization
North American Chapter of the Association for Computational Linguistics (NAACL), 2021
Yichen Jiang, Asli Celikyilmaz, P. Smolensky, Paul Soulos, Sudha Rao, Hamid Palangi, Roland Fernandez, Caitlin Smith, Joey Tianyi Zhou, Jianfeng Gao
02 Jun 2021

Neuro-Symbolic Representations for Video Captioning: A Case for Leveraging Inductive Biases for Vision and Language
Hassan Akbari, Hamid Palangi, Jianwei Yang, Sudha Rao, Asli Celikyilmaz, Roland Fernandez, P. Smolensky, Jianfeng Gao, Shih-Fu Chang
18 Nov 2020

Compositional Explanations of Neurons
Neural Information Processing Systems (NeurIPS), 2020
Jesse Mu, Jacob Andreas
24 Jun 2020
Language Models are Few-Shot Learners
Neural Information Processing Systems (NeurIPS), 2020
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
28 May 2020

Probing Linguistic Systematicity
Emily Goodwin, Koustuv Sinha, Timothy J. O'Donnell
08 May 2020

A Systematic Assessment of Syntactic Generalization in Neural Language Models
Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Gotlieb Wilcox, R. Levy
07 May 2020

Probing the Probing Paradigm: Does Probing Accuracy Entail Task Relevance?
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2020
Abhilasha Ravichander, Yonatan Belinkov, Eduard H. Hovy
02 May 2020

Information-Theoretic Probing with Minimum Description Length
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020
Elena Voita, Ivan Titov
27 Mar 2020

BLiMP: The Benchmark of Linguistic Minimal Pairs for English
Transactions of the Association for Computational Linguistics (TACL), 2019
Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, Samuel R. Bowman
02 Dec 2019
Compositional Generalization for Primitive Substitutions
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019
Yuanpeng Li, Bo Pan, Jianyu Wang, Joel Hestness
07 Oct 2019

Analyzing machine-learned representations: A natural language case study
Cognitive Sciences (CS), 2019
Ishita Dasgupta, Demi Guo, S. Gershman, Noah D. Goodman
12 Sep 2019

Compositionality decomposed: how do neural networks generalise?
Journal of Artificial Intelligence Research (JAIR), 2019
Dieuwke Hupkes, Verna Dankers, Mathijs Mul, Elia Bruni
22 Aug 2019

Blackbox meets blackbox: Representational Similarity and Stability Analysis of Neural Language Models and Brains
Samira Abnar, Lisa Beinborn, Rochelle Choenni, Willem H. Zuidema
04 Jun 2019

What do you learn from context? Probing for sentence structure in contextualized word representations
International Conference on Learning Representations (ICLR), 2019
Ian Tenney, Patrick Xia, Berlin Chen, Alex Jinpeng Wang, Adam Poliak, ..., Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, Ellie Pavlick
15 May 2019

Correlating neural and symbolic representations of language
Annual Meeting of the Association for Computational Linguistics (ACL), 2019
Grzegorz Chrupała, Afra Alishahi
14 May 2019
Compositional generalization in a deep seq2seq model by separating syntax and semantics
Jacob Russin, Jason Jo, R. C. O'Reilly, Yoshua Bengio
22 Apr 2019

The emergence of number and syntax units in LSTM language models
North American Chapter of the Association for Computational Linguistics (NAACL), 2019
Yair Lakretz, Germán Kruszewski, T. Desbordes, Dieuwke Hupkes, S. Dehaene, Marco Baroni
18 Mar 2019

Measuring Compositionality in Representation Learning
Jacob Andreas
19 Feb 2019

Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference
R. Thomas McCoy, Ellie Pavlick, Tal Linzen
04 Feb 2019

Analysis Methods in Neural Language Processing: A Survey
Yonatan Belinkov, James R. Glass
21 Dec 2018

RNNs Implicitly Implement Tensor Product Representations
R. Thomas McCoy, Tal Linzen, Ewan Dunbar, P. Smolensky
20 Dec 2018

Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks
Songlin Yang, Shawn Tan, Alessandro Sordoni, Aaron Courville
22 Oct 2018

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
11 Oct 2018
Page 1 of 2