Compositional Subspace Representation Fine-tuning for Adaptive Large Language Models

Abstract

Adapting large language models to multiple tasks can cause cross-skill interference, where improvements for one skill degrade another. While methods such as LoRA impose orthogonality constraints at the weight level, they do not fully address interference in hidden-state representations. We propose Compositional Subspace Representation Fine-tuning (CS-ReFT), a novel representation-based approach that learns multiple orthonormal subspace transformations, each specializing in a distinct skill, and composes them via a lightweight router. By isolating these subspace edits in the hidden state, rather than weight matrices, CS-ReFT prevents cross-task conflicts more effectively. On the AlpacaEval benchmark, applying CS-ReFT to Llama-2-7B achieves a 93.94% win rate, surpassing GPT-3.5 Turbo (86.30%) while requiring only 0.0098% of model parameters. These findings show that specialized representation edits, composed via a simple router, significantly enhance multi-task instruction following with minimal overhead.
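Below is a minimal PyTorch sketch of the mechanism the abstract describes: several skill-specific low-rank edits applied to an orthonormal subspace of the hidden state, composed by a lightweight router. The class names (SubspaceEdit, CSReFTLayer), the gating scheme, and the exact parameterization are illustrative assumptions modeled on ReFT-style interventions, not the paper's released implementation.

```python
# Sketch of compositional subspace representation edits (assumed parameterization).
import torch
import torch.nn as nn


class SubspaceEdit(nn.Module):
    """One skill-specific low-rank edit applied directly to a hidden state."""

    def __init__(self, hidden_size: int, rank: int):
        super().__init__()
        # R spans an orthonormal subspace of the hidden state (rank x hidden_size).
        self.R = nn.utils.parametrizations.orthogonal(
            nn.Linear(hidden_size, rank, bias=False)
        )
        self.W = nn.Linear(hidden_size, rank)  # learned target projection

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Replace the component of h lying in the subspace with a learned value:
        # h + R^T (W h + b - R h)
        return h + (self.W(h) - self.R(h)) @ self.R.weight


class CSReFTLayer(nn.Module):
    """Composes several skill-specific subspace edits via a lightweight router."""

    def __init__(self, hidden_size: int, rank: int, num_skills: int):
        super().__init__()
        self.edits = nn.ModuleList(
            SubspaceEdit(hidden_size, rank) for _ in range(num_skills)
        )
        self.router = nn.Linear(hidden_size, num_skills)  # per-token gating

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        gates = torch.sigmoid(self.router(h))              # (..., num_skills)
        for k, edit in enumerate(self.edits):
            h = h + gates[..., k:k + 1] * (edit(h) - h)    # gated residual edit
        return h


# Usage: the base model stays frozen; edits are applied to hidden states
# at chosen layers, so only the edit and router parameters are trained.
layer = CSReFTLayer(hidden_size=4096, rank=4, num_skills=3)
hidden = torch.randn(2, 16, 4096)                          # (batch, seq, hidden)
edited = layer(hidden)
```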

@article{zhou2025_2503.10617,
  title={Compositional Subspace Representation Fine-tuning for Adaptive Large Language Models},
  author={Andy Zhou},
  journal={arXiv preprint arXiv:2503.10617},
  year={2025}
}