Are Representations Built from the Ground Up? An Empirical Examination of Local Composition in Language Models

Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
7 October 2022
Emmy Liu, Graham Neubig
Community: CoGe
arXiv: 2210.03575

Papers citing "Are Representations Built from the Ground Up? An Empirical Examination of Local Composition in Language Models"

10 of 10 papers shown

Set-Theoretic Compositionality of Sentence Embeddings
Naman Bansal, Yash Mahajan, Sanjeev Kumar Sinha, S. Karmaker
Community: CoGe
28 Feb 2025

Investigating Idiomaticity in Word Representations
Computational Linguistics (CL), 2024
Wei He, Tiago Kramer Vieira, Marcos García, Carolina Scarton, M. Idiart, Aline Villavicencio
04 Nov 2024

Semantics of Multiword Expressions in Transformer-Based Models: A Survey
Transactions of the Association for Computational Linguistics (TACL), 2024
Filip Miletić, Sabine Schulte im Walde
27 Jan 2024

Assessing Logical Reasoning Capabilities of Encoder-Only Transformer Models
Paulo Pirozelli, M. M. José, Paulo de Tarso P. Filho, A. Brandão, Fabio Gagliardi Cozman
Communities: LRM, ELM
18 Dec 2023

Transformers are uninterpretable with myopic methods: a case study with bounded Dyck grammars
Neural Information Processing Systems (NeurIPS), 2023
Kaiyue Wen, Yuchen Li, Bing Liu, Andrej Risteski
03 Dec 2023

Divergences between Language Models and Human Brains
Yuchen Zhou, Emmy Liu, Graham Neubig, Michael J. Tarr, Leila Wehbe
15 Nov 2023

Unified Representation for Non-compositional and Compositional Expressions
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Ziheng Zeng, Suma Bhat
29 Oct 2023

Bridging Continuous and Discrete Spaces: Interpretable Sentence Representation Learning via Compositional Operations
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
James Y. Huang, Wenlin Yao, Kaiqiang Song, Hongming Zhang, Muhao Chen, Dong Yu
24 May 2023

Construction Grammar Provides Unique Insight into Neural Language Models
Leonie Weissweiler, Taiqi He, Naoki Otani, David R. Mortensen, Lori S. Levin, Hinrich Schütze
04 Feb 2023

Discovering the Compositional Structure of Vector Representations with Role Learning Networks
BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackBoxNLP), 2019
Paul Soulos, R. Thomas McCoy, Tal Linzen, P. Smolensky
Community: CoGe
21 Oct 2019