
PluralLLM: Pluralistic Alignment in LLMs via Federated Learning

Abstract

Ensuring Large Language Models (LLMs) align with diverse human preferences while preserving privacy and fairness remains a challenge. Existing methods, such as Reinforcement Learning from Human Feedback (RLHF), rely on centralized data collection, making them computationally expensive and privacy-invasive. We introduce PluralLLM, a federated learning-based approach that enables multiple user groups to collaboratively train a transformer-based preference predictor without sharing sensitive data; the resulting predictor can also serve as a reward model for aligning LLMs. Our method leverages Federated Averaging (FedAvg) to aggregate preference updates efficiently, achieving 46% faster convergence, a 4% improvement in alignment scores, and nearly the same group fairness measure as centralized training. Evaluated on a Q/A preference alignment task, PluralLLM demonstrates that federated preference learning offers a scalable and privacy-preserving alternative for aligning LLMs with diverse human values.
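The aggregation the abstract refers to is standard Federated Averaging over the preference predictor's parameters. The following is a minimal sketch, not the authors' implementation: it assumes a PyTorch preference predictor that maps encoded Q/A prompts to preference logits, and the `local_update` and `fedavg` helpers, data-loader interface, and hyperparameters are illustrative.

```python
# Minimal FedAvg sketch for federated preference learning (illustrative only).
# Each user group trains a local copy of the preference predictor on its own
# data; the server averages parameters weighted by group data size, so raw
# preference data never leaves the group.
from copy import deepcopy
import torch

def local_update(model, loader, epochs=1, lr=1e-4):
    """Hypothetical local training pass on one group's preference data."""
    model = deepcopy(model)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for prompts, labels in loader:   # labels: index of the preferred answer
            opt.zero_grad()
            loss = loss_fn(model(prompts), labels)
            loss.backward()
            opt.step()
    return model.state_dict()

def fedavg_round(global_model, group_loaders):
    """One federated round: aggregate group updates without sharing raw data."""
    states, weights = [], []
    for loader in group_loaders:
        states.append(local_update(global_model, loader))
        weights.append(len(loader.dataset))
    total = float(sum(weights))
    # Weighted average of parameters across groups.
    avg = {k: sum((w / total) * s[k].float() for w, s in zip(weights, states))
           for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```

In this sketch, repeating `fedavg_round` for several rounds plays the role of centralized preference training, and the converged predictor could then be used as a reward signal for downstream LLM alignment.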

@article{srewa2025_2503.09925,
  title={PluralLLM: Pluralistic Alignment in LLMs via Federated Learning},
  author={Mahmoud Srewa and Tianyu Zhao and Salma Elmalaki},
  journal={arXiv preprint arXiv:2503.09925},
  year={2025}
}