Inference-Time Decomposition of Activations (ITDA): A Scalable Approach to Interpreting Large Language Models

23 May 2025
Patrick Leask
Neel Nanda
Noura Al Moubayed
Abstract

Sparse autoencoders (SAEs) are a popular method for decomposing large language model (LLM) activations into interpretable latents. However, due to their substantial training cost, most academic research uses open-source SAEs, which are available only for a restricted set of models of up to 27B parameters. SAE latents are also learned from a dataset of activations, which means they do not transfer between models. Motivated by relative representation similarity measures, we introduce Inference-Time Decomposition of Activations (ITDA) models, an alternative method for decomposing language model activations. To train an ITDA, we greedily construct a dictionary of language model activations on a dataset of prompts, selecting those activations that are worst approximated by matching pursuit on the existing dictionary. ITDAs can be trained in just 1% of the time required for SAEs, using 1% of the data. This allowed us to train ITDAs on Llama-3.1 70B and 405B on a single consumer GPU. ITDAs can achieve reconstruction performance similar to SAEs on some target LLMs, but generally incur a performance penalty. However, ITDA dictionaries enable cross-model comparisons, and a simple Jaccard similarity index on ITDA dictionaries outperforms existing methods such as CKA, SVCCA, and relative representation similarity metrics. ITDAs provide a cheap alternative to SAEs where computational resources are limited, or when cross-model comparisons are necessary. Code available at this https URL.
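To make the training procedure concrete, here is a minimal NumPy sketch of the greedy dictionary construction described in the abstract, assuming unit-norm atoms and a fixed pursuit depth. The function names and the parameters k and tol are illustrative choices, not the paper's; the released code may batch activations and set the admission threshold differently.

```python
import numpy as np

def matching_pursuit(x, dictionary, k=8):
    """Greedy k-step matching pursuit of x against unit-norm rows of `dictionary`.

    Returns the reconstruction and the squared residual norm.
    """
    residual = x.astype(float).copy()
    recon = np.zeros_like(residual)
    for _ in range(k):
        scores = dictionary @ residual          # correlation with each atom
        j = int(np.argmax(np.abs(scores)))      # best-matching atom
        recon += scores[j] * dictionary[j]      # add its projection
        residual -= scores[j] * dictionary[j]   # peel it off the residual
    return recon, float(residual @ residual)

def build_itda_dictionary(activations, k=8, tol=0.1):
    """Greedily grow a dictionary from a stream of activations.

    An activation is admitted as a new (unit-norm) atom when matching
    pursuit on the current dictionary reconstructs it poorly, i.e. its
    relative squared error exceeds `tol` (an assumed admission rule).
    """
    atoms = [activations[0] / np.linalg.norm(activations[0])]
    for x in activations[1:]:
        D = np.stack(atoms)                     # current dictionary (n_atoms, d)
        _, err = matching_pursuit(x, D, k=k)
        if err / float(x @ x) > tol:            # worst-approximated activations
            atoms.append(x / np.linalg.norm(x))
    return np.stack(atoms)
```

At inference time, a new activation is presumably decomposed by running the same matching pursuit against the frozen dictionary, which is what gives the method its name.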
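The cross-model comparison can also be sketched. One reading of the abstract, and it is only an assumption here, is that each atom is identified by the dataset position (prompt index, token index) whose activation it stores; since ITDAs for different models can be trained on the same prompts, their dictionaries become comparable as sets of positions, and the Jaccard index follows directly.

```python
def jaccard_similarity(atoms_a, atoms_b):
    """Jaccard index |A ∩ B| / |A ∪ B| between two ITDA dictionaries.

    Assumption: each dictionary is given as a collection of
    (prompt_idx, token_idx) positions whose activations became atoms,
    so dictionaries from different models trained on the same prompts
    are directly comparable as sets.
    """
    a, b = set(atoms_a), set(atoms_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0
```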

View on arXiv
@article{leask2025_2505.17769,
  title={Inference-Time Decomposition of Activations (ITDA): A Scalable Approach to Interpreting Large Language Models},
  author={Patrick Leask and Neel Nanda and Noura Al Moubayed},
  journal={arXiv preprint arXiv:2505.17769},
  year={2025}
}