ParaScopes: What do Language Models Activations Encode About Future Text?

31 October 2025
Nicky Pochinkov
Yulia Volkova
Anna Vasileva
Sai V R Chereddy
Main: 9 pages · 14 figures · Bibliography: 3 pages · 12 tables · Appendix: 12 pages
Abstract

Interpretability studies in language models often investigate forward-looking representations in activations. However, as language models become capable of ever longer time-horizon tasks, methods for understanding activations often remain limited to testing specific concepts or tokens. We develop a framework of Residual Stream Decoders as a method of probing model activations for paragraph-scale and document-scale plans. We test several methods and find that information equivalent to 5+ tokens of future context can be decoded in small models. These results lay the groundwork for better monitoring of language models and for a better understanding of how they might encode longer-term planning information.
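To make the idea of a residual stream decoder concrete, below is a minimal sketch of one plausible setup: a linear probe trained to regress a residual-stream activation onto an embedding of the paragraph the model goes on to generate. All specifics here (dimensions, the cosine loss, the toy data) are illustrative assumptions and not the paper's actual ParaScopes implementation.

```python
# Sketch of a "residual stream decoder" probe (illustrative, not the paper's code).
import torch
import torch.nn as nn

D_MODEL = 512    # residual stream width of the probed model (assumed)
D_EMBED = 384    # dimensionality of the future-text embedding (assumed)

class ResidualStreamDecoder(nn.Module):
    """Linear map from one residual-stream activation to a vector
    summarising the text the model generates next."""
    def __init__(self, d_model: int, d_embed: int):
        super().__init__()
        self.proj = nn.Linear(d_model, d_embed)

    def forward(self, resid: torch.Tensor) -> torch.Tensor:
        return self.proj(resid)

# Toy stand-ins: in practice these would be activations collected at a fixed
# layer and token position, paired with embeddings of the following paragraph.
acts = torch.randn(1024, D_MODEL)
future_embeds = torch.randn(1024, D_EMBED)

decoder = ResidualStreamDecoder(D_MODEL, D_EMBED)
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.CosineEmbeddingLoss()

for step in range(100):
    opt.zero_grad()
    pred = decoder(acts)
    # Cosine loss pulls each prediction toward its paired future-text embedding.
    loss = loss_fn(pred, future_embeds, torch.ones(len(acts)))
    loss.backward()
    opt.step()
```

A trained probe of this kind can then be evaluated by how much of the upcoming text it recovers, e.g. compared against baselines that see only a few tokens of real future context.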
