ResearchTrend.AI · Papers · 2206.03529 · Cited By
How to Dissect a Muppet: The Structure of Transformer Embedding Spaces
7 June 2022
Timothee Mickus, Denis Paperno, Mathieu Constant

Papers citing "How to Dissect a Muppet: The Structure of Transformer Embedding Spaces"
16 / 16 papers shown

 1. Linguistic Interpretability of Transformer-based Language Models: a systematic review
    Miguel López-Otal, Jorge Gracia, Jordi Bernad, Carlos Bobed, Lucía Pitarch-Ballesteros, Emma Anglés-Herrero
    VLM · 36 · 0 · 0 · 09 Apr 2025

 2. Cognitive Activation and Chaotic Dynamics in Large Language Models: A Quasi-Lyapunov Analysis of Reasoning Mechanisms
    Xiaojian Li, Yongkang Leng, Ruiqing Ding, Hangjie Mo, Shanlin Yang
    LRM · 47 · 0 · 0 · 15 Mar 2025

 3. What are Models Thinking about? Understanding Large Language Model Hallucinations "Psychology" through Model Inner State Analysis
    Peiran Wang, Yang Liu, Yunfei Lu, Jue Hong, Ye Wu
    HILM, LRM · 67 · 0 · 0 · 20 Feb 2025

 4. Transformer Normalisation Layers and the Independence of Semantic Subspaces
    S. Menary, Samuel Kaski, Andre Freitas
    41 · 2 · 0 · 25 Jun 2024

 5. Isotropy, Clusters, and Classifiers
    Timothee Mickus, Stig-Arne Gronroos, Joseph Attieh
    19 · 0 · 0 · 05 Feb 2024

 6. The Curious Case of Hallucinatory (Un)answerability: Finding Truths in the Hidden States of Over-Confident Large Language Models
    Aviv Slobodkin, Omer Goldman, Avi Caciularu, Ido Dagan, Shauli Ravfogel
    HILM, LRM · 45 · 24 · 0 · 18 Oct 2023

 7. Why bother with geometry? On the relevance of linear decompositions of Transformer embeddings
    Timothee Mickus, Raúl Vázquez
    15 · 2 · 0 · 10 Oct 2023

 8. Explaining How Transformers Use Context to Build Predictions
    Javier Ferrando, Gerard I. Gállego, Ioannis Tsiamas, Marta R. Costa-jussá
    18 · 31 · 0 · 21 May 2023

 9. Token-wise Decomposition of Autoregressive Language Model Hidden States for Analyzing Model Predictions
    Byung-Doh Oh, William Schuler
    16 · 2 · 0 · 17 May 2023

10. Dissecting Recall of Factual Associations in Auto-Regressive Language Models
    Mor Geva, Jasmijn Bastings, Katja Filippova, Amir Globerson
    KELM · 189 · 261 · 0 · 28 Apr 2023

11. Jump to Conclusions: Short-Cutting Transformers With Linear Transformations
    Alexander Yom Din, Taelin Karidi, Leshem Choshen, Mor Geva
    11 · 57 · 0 · 16 Mar 2023

12. Understanding Transformer Memorization Recall Through Idioms
    Adi Haviv, Ido Cohen, Jacob Gidron, R. Schuster, Yoav Goldberg, Mor Geva
    24 · 48 · 0 · 07 Oct 2022

13. Analyzing Transformers in Embedding Space
    Guy Dar, Mor Geva, Ankit Gupta, Jonathan Berant
    10 · 83 · 0 · 06 Sep 2022

14. All Bark and No Bite: Rogue Dimensions in Transformer Language Models Obscure Representational Quality
    William Timkey, Marten van Schijndel
    213 · 110 · 0 · 09 Sep 2021

15. The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives
    Elena Voita, Rico Sennrich, Ivan Titov
    188 · 181 · 0 · 03 Sep 2019

16. Effective Approaches to Attention-based Neural Machine Translation
    Thang Luong, Hieu H. Pham, Christopher D. Manning
    214 · 7,687 · 0 · 17 Aug 2015