Steering Large Language Models using Conceptors: Improving Addition-Based Activation Engineering

9 October 2024
Joris Postmus
Steven Abreu
Abstract

Large language models have transformed AI, yet reliably controlling their outputs remains a challenge. This paper explores activation engineering, where the outputs of pre-trained LLMs are controlled by manipulating their activations at inference time. Unlike traditional methods that use a single steering vector, we introduce conceptors: mathematical constructs that represent sets of activation vectors as ellipsoidal regions. Conceptors act as soft projection matrices and offer more precise control over complex activation patterns. Our experiments demonstrate that conceptors outperform traditional methods across multiple steering tasks. We further use Boolean operations on conceptors to combine steering goals, which empirically outperforms additively combining steering vectors on a set of tasks. These results highlight conceptors as a promising tool for more effective steering of LLMs. Our code is available on this http URL.
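The abstract does not spell out the construction, but a minimal sketch can illustrate the idea. The sketch below assumes the standard conceptor formula C = R(R + α⁻²I)⁻¹ computed from the correlation matrix R of cached activations, together with the usual conceptor Boolean algebra; the function names, the aperture value, and the simple steering update h ← Ch are illustrative assumptions, not the authors' exact implementation.

```python
# Hedged sketch of conceptor-based activation steering.
# Assumptions: C = R (R + aperture^-2 I)^-1 (standard conceptor formula),
# steering applies C as a soft projection to a layer's activations, and
# Boolean operations follow the usual conceptor algebra. Names and values
# are hypothetical.
import numpy as np

def compute_conceptor(X: np.ndarray, aperture: float = 10.0) -> np.ndarray:
    """Fit a conceptor from cached activations X of shape (n_samples, d)."""
    d = X.shape[1]
    R = X.T @ X / X.shape[0]                      # correlation matrix, (d, d)
    return R @ np.linalg.inv(R + aperture ** -2 * np.eye(d))

def steer(h: np.ndarray, C: np.ndarray) -> np.ndarray:
    """Apply the conceptor as a soft projection to one activation vector."""
    return C @ h

# Boolean operations on conceptors (standard conceptor algebra).
def conceptor_not(C: np.ndarray) -> np.ndarray:
    return np.eye(C.shape[0]) - C

def conceptor_and(C: np.ndarray, B: np.ndarray) -> np.ndarray:
    I = np.eye(C.shape[0])
    return np.linalg.inv(np.linalg.inv(C) + np.linalg.inv(B) - I)

def conceptor_or(C: np.ndarray, B: np.ndarray) -> np.ndarray:
    return conceptor_not(conceptor_and(conceptor_not(C), conceptor_not(B)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    acts = rng.normal(size=(256, 64))             # hypothetical cached activations
    C = compute_conceptor(acts, aperture=10.0)
    h = rng.normal(size=64)                       # one activation to steer
    print(steer(h, C).shape)                      # (64,)
```

In use, one would cache activations at a chosen layer for prompts exhibiting the target behaviour, fit a conceptor on them, and then multiply that layer's activations by C at inference time, rather than adding a single steering vector; combined goals would use the AND/OR operations above.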

@article{postmus2025_2410.16314,
  title={Steering Large Language Models using Conceptors: Improving Addition-Based Activation Engineering},
  author={Joris Postmus and Steven Abreu},
  journal={arXiv preprint arXiv:2410.16314},
  year={2025}
}