Fundamental Safety-Capability Trade-offs in Fine-tuning Large Language Models

Abstract
Fine-tuning Large Language Models (LLMs) on task-specific datasets has become a primary way of adapting LLMs to downstream applications. However, it has been empirically observed that this approach to enhancing capability inevitably compromises safety, a phenomenon known as the safety-capability trade-off in LLM fine-tuning. This paper presents a theoretical framework for understanding the interplay between safety and capability in two primary safety-aware LLM fine-tuning strategies, providing new insights into the effects of data similarity, context overlap, and the alignment loss landscape. Our theoretical results characterize the fundamental limits of the safety-capability trade-off in LLM fine-tuning, and these limits are validated by numerical experiments.
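The abstract does not spell out the two safety-aware fine-tuning strategies it analyzes. As an illustration only, the minimal sketch below shows one common safety-aware recipe: optimizing a weighted combination of a downstream task objective and a safety/alignment objective. The toy model, synthetic data, and the safety_weight parameter are placeholders assumed for this sketch, not the paper's method.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for an LLM: a small linear classifier over feature vectors.
model = nn.Linear(16, 4)

# Hypothetical synthetic batches: one drawn from the downstream task data,
# one from a safety/alignment dataset.
task_x, task_y = torch.randn(32, 16), torch.randint(0, 4, (32,))
safe_x, safe_y = torch.randn(32, 16), torch.randint(0, 4, (32,))

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
safety_weight = 0.5  # trades capability (task loss) against safety (alignment loss)

for step in range(100):
    optimizer.zero_grad()
    task_loss = loss_fn(model(task_x), task_y)      # capability objective
    safety_loss = loss_fn(model(safe_x), safe_y)    # alignment/safety objective
    loss = task_loss + safety_weight * safety_loss  # weighted combination
    loss.backward()
    optimizer.step()

print(f"final task loss {task_loss.item():.3f}, safety loss {safety_loss.item():.3f}")

Sweeping safety_weight in such a setup traces out an empirical trade-off curve between the two objectives, which is the kind of trade-off the paper bounds theoretically.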
@article{chen2025_2503.20807,
  title={Fundamental Safety-Capability Trade-offs in Fine-tuning Large Language Models},
  author={Pin-Yu Chen and Han Shen and Payel Das and Tianyi Chen},
  journal={arXiv preprint arXiv:2503.20807},
  year={2025}
}