
A Primer on the Inner Workings of Transformer-based Language Models

30 April 2024
Javier Ferrando
Gabriele Sarti
Arianna Bisazza
Marta R. Costa-jussà
Main: 31 pages · 16 figures · 1 table
Bibliography: 25 pages
Appendix: 1 page
Abstract

The rapid progress of research aimed at interpreting the inner workings of advanced language models has highlighted a need for contextualizing the insights gained from years of work in this area. This primer provides a concise technical introduction to the current techniques used to interpret the inner workings of Transformer-based language models, focusing on the generative decoder-only architecture. We conclude by presenting a comprehensive overview of the known internal mechanisms implemented by these models, uncovering connections across popular approaches and active research directions in this area.
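The abstract's focus on the generative decoder-only architecture can be illustrated with a minimal sketch of its core component, causal (masked) self-attention, in which each token attends only to itself and earlier positions. This is an illustrative NumPy implementation, not code from the primer; the function name and shapes are assumptions for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_self_attention(X, Wq, Wk, Wv):
    """Single-head masked self-attention over a sequence X of shape (T, d).

    The upper-triangular mask enforces the decoder-only pattern:
    position t sees only positions 0..t.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                      # (T, T) attention logits
    mask = np.triu(np.ones_like(scores, dtype=bool), 1)  # True above the diagonal
    scores[mask] = -np.inf                             # block attention to the future
    return softmax(scores) @ V                         # weighted mix of value vectors

# Toy example: 4 tokens, model width 8.
rng = np.random.default_rng(0)
T, d = 4, 8
X = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = causal_self_attention(X, Wq, Kv := Wk, Wv) if False else causal_self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

Because the first position can attend only to itself, its output is exactly its own value vector, a small sanity check that the causal mask is working.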
