Uncovering Latent Chain of Thought Vectors in Language Models

21 September 2024
Jason Zhang
Scott Viteri
Abstract

In this work, we examine how targeted perturbations in the activation space of Language Models (LMs) can encode complex reasoning patterns. We inject steering vectors, derived from LM activations, into LMs at inference time and study whether these vectors can induce Chain-of-Thought (CoT) reasoning in LMs without the need for natural language prompting. We demonstrate this approach on Llama 3 8B Instruct and Mistral 7B v0.2 Instruct and show that activation-space interventions achieve competitive, if not superior, performance compared to traditional CoT prompting across multiple reasoning benchmarks, including GSM8K, MMLU, AGIEval, and AI2's ARC. These findings suggest that neural network activations can encode reasoning patterns, offering a new application of activation space manipulation as a tool for tuning model behavior.
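
The mechanism the abstract describes is straightforward to prototype. Below is a minimal sketch, assuming a Hugging Face causal LM: it derives a steering vector contrastively from a CoT-eliciting prompt and a plain prompt, then adds it to one layer's residual stream via a forward hook during generation. The checkpoint name, injection layer, steering strength, and contrastive prompts are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal activation-steering sketch (illustrative assumptions throughout:
# checkpoint, LAYER, ALPHA, and the prompts used to derive the vector).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed checkpoint
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)
model.eval()

LAYER = 14   # hypothetical injection layer
ALPHA = 4.0  # hypothetical steering strength

@torch.no_grad()
def mean_residual(prompt: str) -> torch.Tensor:
    """Mean residual-stream activation at the output of decoder layer LAYER."""
    ids = tok(prompt, return_tensors="pt").to(model.device)
    out = model(**ids, output_hidden_states=True)
    # hidden_states[i + 1] is the output of decoder layer i.
    return out.hidden_states[LAYER + 1].mean(dim=1).squeeze(0)

# One common way to build such a vector: the difference between mean
# activations under a CoT-eliciting prompt and a plain prompt.
cot_act = mean_residual("Q: What is 17 + 25? Let's think step by step.")
plain_act = mean_residual("Q: What is 17 + 25?")
steering_vec = cot_act - plain_act

def add_steering(module, inputs, output):
    # Decoder layers return a tuple; element 0 holds the hidden states.
    return (output[0] + ALPHA * steering_vec,) + output[1:]

handle = model.model.layers[LAYER].register_forward_hook(add_steering)
try:
    ids = tok("Q: A train travels 60 miles in 1.5 hours. What is its speed?\nA:",
              return_tensors="pt").to(model.device)
    gen = model.generate(**ids, max_new_tokens=64)
    print(tok.decode(gen[0], skip_special_tokens=True))
finally:
    handle.remove()  # always detach the hook when done
```

In practice the injection layer and the coefficient ALPHA are tuned per model; too small a coefficient has no effect, while too large a one degrades fluency.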

BibTeX
@article{zhang2025_2409.14026,
  title={Uncovering Latent Chain of Thought Vectors in Language Models},
  author={Jason Zhang and Scott Viteri},
  journal={arXiv preprint arXiv:2409.14026},
  year={2025}
}