Experimental Contexts Can Facilitate Robust Semantic Property Inference in Language Models, but Inconsistently

12 January 2024
Kanishka Misra, Allyson Ettinger, Kyle Mahowald
ArXiv / PDF / HTML

Papers citing "Experimental Contexts Can Facilitate Robust Semantic Property Inference in Language Models, but Inconsistently"

4 / 4 papers shown

How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model
Michael Hanna, Ollie Liu, Alexandre Variengien
30 Apr 2023

Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small
Kevin Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, Jacob Steinhardt
01 Nov 2022

Structural Persistence in Language Models: Priming as a Window into Abstract Language Representations
Arabella J. Sinclair, Jaap Jumelet, Willem H. Zuidema, Raquel Fernández
30 Sep 2021

Sorting through the noise: Testing robustness of information processing in pre-trained language models
Lalchand Pandia, Allyson Ettinger
25 Sep 2021