ResearchTrend.AI
Train Sparse Autoencoders Efficiently by Utilizing Features Correlation

28 May 2025
Vadim Kurochkin
Yaroslav Aksenov
Daniil Laptev
Daniil Gavrilov
Nikita Balagansky

Papers citing "Train Sparse Autoencoders Efficiently by Utilizing Features Correlation"

2 papers shown
SAEBench: A Comprehensive Benchmark for Sparse Autoencoders in Language Model Interpretability
Adam Karvonen
Can Rager
Johnny Lin
Curt Tigges
Joseph Isaac Bloom
...
Matthew Wearden
Arthur Conmy
Samuel Marks
Neel Nanda
12 Mar 2025
Sparse Autoencoders Can Interpret Randomly Initialized Transformers
Thomas Heap
Tim Lawson
Lucy Farnik
Laurence Aitchison
29 Jan 2025