Designing Role Vectors to Improve LLM Inference Behaviour

17 February 2025
Daniele Potertì
Andrea Seveso
Fabio Mercorio
Abstract

The influence of personas on Large Language Models (LLMs) has been widely studied, yet their direct impact on performance remains uncertain. This work explores a novel approach to guiding LLM behaviour through role vectors, an alternative to persona-based prompting. We construct 29 role vectors derived from model activations and evaluate their impact on benchmark performance across multiple domains. Our analysis investigates whether these vectors can effectively steer models toward domain-specific expertise. We evaluate two key interventions: (i) activation addition, which reinforces role-specific directions, and (ii) directional ablation, which removes them. Results on well-established benchmarks indicate that role vectors do, in fact, influence model behaviour, improving task performance in relevant domains while only marginally affecting unrelated tasks. This, in turn, suggests that manipulating internal model representations has a greater impact on outcomes than persona-based prompting.
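The two interventions named in the abstract can be sketched on toy activation vectors. This is a minimal illustration, not the authors' implementation: the difference-in-means construction of the role vector, the steering strength `alpha`, and the function names are all assumptions made for the example.

```python
import numpy as np

def role_vector(acts_with_role, acts_without_role):
    """Difference-in-means direction between activations collected with and
    without a role prompt. This is a common way to build steering vectors;
    whether the paper uses exactly this construction is an assumption."""
    return np.mean(acts_with_role, axis=0) - np.mean(acts_without_role, axis=0)

def activation_addition(h, v, alpha=1.0):
    """(i) Activation addition: reinforce the role-specific direction by
    adding a scaled copy of the role vector to a hidden state `h`.
    `alpha` is a hypothetical steering-strength parameter."""
    return h + alpha * v

def directional_ablation(h, v):
    """(ii) Directional ablation: remove the role direction by projecting
    the hidden state onto the subspace orthogonal to the role vector."""
    v_hat = v / np.linalg.norm(v)
    return h - np.dot(h, v_hat) * v_hat
```

In practice such edits are applied to intermediate hidden states at chosen layers during inference (e.g. via forward hooks), so that the model is steered toward, or stripped of, the role-specific direction without changing the prompt.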

View on arXiv
@article{potertì2025_2502.12055,
  title={Designing Role Vectors to Improve LLM Inference Behaviour},
  author={Daniele Potertì and Andrea Seveso and Fabio Mercorio},
  journal={arXiv preprint arXiv:2502.12055},
  year={2025}
}