Steered Generation via Gradient Descent on Sparse Features

25 February 2025
Sumanta Bhattacharyya
Pedram Rooshenas
Abstract

Large language models (LLMs) encode a diverse range of linguistic features within their latent representations, which can be harnessed to steer their output toward specific target characteristics. In this paper, we modify the internal structure of LLMs by training sparse autoencoders to learn a sparse representation of the query embedding, allowing precise control over the model's attention distribution. We demonstrate that manipulating this sparse representation effectively transforms the output toward different stylistic and cognitive targets. Specifically, in an educational setting, we show that the cognitive complexity of LLM-generated feedback can be systematically adjusted by modifying the encoded query representation at a specific layer. To achieve this, we guide the learned sparse embedding toward the representation of samples from the desired cognitive complexity level, using gradient-based optimization in the latent space.
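The abstract describes steering by optimizing a sparse latent code of the query embedding toward the code of target-level samples and decoding the result back into the model. The following is a minimal sketch of that idea, not the authors' implementation: the class and function names, the shapes, the ReLU-based sparse autoencoder, and the MSE-plus-L1 objective are all illustrative assumptions.

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Illustrative sparse autoencoder over a layer's query embeddings."""
    def __init__(self, d_model: int, d_sparse: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_sparse)
        self.decoder = nn.Linear(d_sparse, d_model)

    def encode(self, x):
        # ReLU keeps the latent code non-negative and encourages sparsity.
        return torch.relu(self.encoder(x))

    def decode(self, z):
        return self.decoder(z)

def steer_query_embedding(sae, query_emb, target_sparse,
                          steps=50, lr=1e-2, l1_weight=1e-3):
    """Move the sparse code of `query_emb` toward `target_sparse` by gradient
    descent in the latent space, then decode back to the embedding space.
    (Hypothetical helper; loss weights and step counts are placeholders.)"""
    z = sae.encode(query_emb).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Pull the code toward the target-level representation while keeping it sparse.
        loss = torch.nn.functional.mse_loss(z, target_sparse) + l1_weight * z.abs().sum()
        loss.backward()
        opt.step()
        with torch.no_grad():
            z.clamp_(min=0.0)  # keep latent features non-negative
    return sae.decode(z.detach())

# Example usage (all sizes and tensors are placeholders):
d_model, d_sparse = 4096, 16384
sae = SparseAutoencoder(d_model, d_sparse)   # assumed already trained on query embeddings
query_emb = torch.randn(1, d_model)          # query embedding taken from the chosen layer
# Mean sparse code of sample embeddings from the desired cognitive-complexity level:
target_sparse = sae.encode(torch.randn(8, d_model)).mean(dim=0, keepdim=True)
steered_emb = steer_query_embedding(sae, query_emb, target_sparse)
# `steered_emb` would then replace the original query embedding at that layer during generation.

In this reading, steering happens entirely in the autoencoder's latent space; only the decoded embedding is injected back, so the LLM's weights stay untouched.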

@article{bhattacharyya2025_2502.18644,
  title={Steered Generation via Gradient Descent on Sparse Features},
  author={Sumanta Bhattacharyya and Pedram Rooshenas},
  journal={arXiv preprint arXiv:2502.18644},
  year={2025}
}