A Non-Linear Structural Probe

North American Chapter of the Association for Computational Linguistics (NAACL), 2021
21 May 2021
Jennifer C. White, Tiago Pimentel, Naomi Saphra, Ryan Cotterell

Papers citing "A Non-Linear Structural Probe"

23 papers shown

Freeze, Diffuse, Decode: Geometry-Aware Adaptation of Pretrained Transformer Embeddings for Antimicrobial Peptide Design
Pankhil Gawade, Adam Izdebski, Myriam Lizotte, Kevin R. Moon, Jake S. Rhodes, Guy Wolf, Ewa Szczurek
28 Nov 2025

Seed-Induced Uniqueness in Transformer Models: Subspace Alignment Governs Subliminal Transfer
Ayşe Selin Okatan, Mustafa İlhan Akbaş, Laxima Niure Kandel, Berker Peköz
02 Nov 2025

Beyond Linear Probes: Dynamic Safety Monitoring for Language Models
James Oldfield, Juil Sock, Ioannis Patras, Adel Bibi, Fazl Barez
30 Sep 2025

Probing Syntax in Large Language Models: Successes and Remaining Challenges
Pablo Diego-Simón, Emmanuel Chemla, J. King, Yair Lakretz
05 Aug 2025

The Non-Linear Representation Dilemma: Is Causal Abstraction Enough for Mechanistic Interpretability?
Denis Sutter, Julian Minder, Thomas Hofmann, Tiago Pimentel
11 Jul 2025

Linguistic Interpretability of Transformer-based Language Models: a systematic review
Miguel López-Otal, Jorge Gracia, Jordi Bernad, Carlos Bobed, Lucía Pitarch-Ballesteros, Emma Anglés-Herrero
09 Apr 2025

A polar coordinate system represents syntax in large language models
Neural Information Processing Systems (NeurIPS), 2024
Pablo Diego-Simón, Stéphane d'Ascoli, Emmanuel Chemla, Yair Lakretz, J. King
07 Dec 2024

Probe-Me-Not: Protecting Pre-trained Encoders from Malicious Probing
Network and Distributed System Security Symposium (NDSS), 2024
Ruyi Ding, Tong Zhou, Lili Su, A. A. Ding, Xiaolin Xu, Yunsi Fei
19 Nov 2024

Mechanistic?
BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2024
Naomi Saphra, Sarah Wiegreffe
07 Oct 2024

A Critical Study of What Code-LLMs (Do Not) Learn
Abhinav Anand, Shweta Verma, Krishna Narasimhan, Mira Mezini
17 Jun 2024

Non-Linear Inference Time Intervention: Improving LLM Truthfulness
Jakub Hoscilowicz, Adam Wiacek, Jan Chojnacki, Adam Cieślak, Leszek Michon, Vitalii Urbanevych, Artur Janicki
27 Mar 2024

Hitting "Probe"rty with Non-Linearity, and More
Avik Pal, Madhura Pawar
25 Feb 2024

Rethinking the Construction of Effective Metrics for Understanding the Mechanisms of Pretrained Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
You Li, Jinhui Yin, Yuming Lin
19 Oct 2023

Disentangling the Linguistic Competence of Privacy-Preserving BERT
BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP), 2023
Stefan Arnold, Nils Kemmerzell, Annika Schreiner
17 Oct 2023

Arithmetic with Language Models: from Memorization to Computation
Neural Networks (Neural Netw.), 2023
Davide Maltoni, Matteo Ferrara
02 Aug 2023

Sociodemographic Bias in Language Models: A Survey and Forward Path
Vipul Gupta, Pranav Narayanan Venkit, Shomir Wilson, R. Passonneau
13 Jun 2023

The Architectural Bottleneck Principle
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Tiago Pimentel, Josef Valvoda, Niklas Stoehr, Ryan Cotterell
11 Nov 2022

Emergent Linguistic Structures in Neural Networks are Fragile
Emanuele La Malfa, Matthew Wicker, Marta Kwiatkowska
31 Oct 2022

AST-Probe: Recovering abstract syntax trees from hidden representations of pre-trained language models
International Conference on Automated Software Engineering (ASE), 2022
José Antonio Hernández López, Martin Weyssow, Jesús Sánchez Cuadrado, H. Sahraoui
23 Jun 2022

Kernelized Concept Erasure
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Shauli Ravfogel, Francisco Vargas, Yoav Goldberg, Ryan Cotterell
28 Jan 2022

The Dangers of Underclaiming: Reasons for Caution When Reporting How NLP Systems Fail
Sam Bowman
15 Oct 2021

Structural Persistence in Language Models: Priming as a Window into Abstract Language Representations
Arabella J. Sinclair, Jaap Jumelet, Willem H. Zuidema, Raquel Fernández
30 Sep 2021

Conditional probing: measuring usable information beyond a baseline
John Hewitt, Kawin Ethayarajh, Abigail Z. Jacobs, Christopher D. Manning
19 Sep 2021