ResearchTrend.AI
Unveiling LLM Mechanisms Through Neural ODEs and Control Theory
Yukun Zhang, Qi Dong
arXiv:2406.16985 · 23 June 2024
Papers citing "Unveiling LLM Mechanisms Through Neural ODEs and Control Theory"

2 / 2 papers shown
Rethinking Interpretability in the Era of Large Language Models
Chandan Singh, J. Inala, Michel Galley, Rich Caruana, Jianfeng Gao
LRM · AI4CE · 30 Jan 2024
Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small
Kevin Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, Jacob Steinhardt
01 Nov 2022