
Fed-SE: Federated Self-Evolution for Privacy-Constrained Multi-Environment LLM Agents

Xiang Chen
Yuling Shi
Qizhen Lan
Yuchao Qiu
Min Wang
Xiaodong Gu
Yanfu Yan
Main: 8 pages · Appendix: 14 pages · Bibliography: 3 pages · 16 figures · 6 tables
Abstract

LLM agents are widely deployed in complex interactive tasks, yet privacy constraints often preclude centralized optimization and co-evolution across dynamic environments. Despite the demonstrated success of Federated Learning (FL) on static datasets, its effectiveness in open-ended, self-evolving agent systems remains largely unexplored. In such settings, directly applying standard FL is particularly challenging: heterogeneous tasks and sparse, trajectory-level reward signals give rise to severe gradient instability, which undermines global optimization. To bridge this gap, we propose Fed-SE, a Federated Self-Evolution framework for LLM agents that establishes a local-evolution, global-aggregation paradigm. Locally, agents apply parameter-efficient fine-tuning to filtered, high-return trajectories to achieve stable gradient updates. Globally, Fed-SE aggregates updates within a low-rank subspace, reducing communication cost across clients. Experiments across five heterogeneous environments demonstrate that Fed-SE improves average task success rates by 10% over the state-of-the-art FedIT, validating its effectiveness for cross-environment knowledge transfer under privacy constraints.
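The local-evolution, global-aggregation loop described in the abstract can be sketched in a few lines: each client keeps only high-return trajectories for its local parameter-efficient update, and the server averages the clients' low-rank (LoRA-style) update matrices. This is a minimal illustrative sketch, not the paper's implementation; the function names, the return-threshold filter, and the uniform-weight averaging are all assumptions for exposition.

```python
import numpy as np

def filter_trajectories(trajs, threshold):
    """Keep only high-return trajectories for stable local updates.

    Each trajectory is assumed (illustratively) to be a dict with a
    scalar "return" field; the threshold is a hypothetical filter rule.
    """
    return [t for t in trajs if t["return"] >= threshold]

def aggregate_lora(client_updates, weights=None):
    """Server-side averaging of clients' low-rank update matrices.

    client_updates: list of dicts mapping parameter names to the
    low-rank factor matrices each client would upload (only these
    small matrices cross the network, not full model weights).
    """
    n = len(client_updates)
    if weights is None:
        weights = [1.0 / n] * n  # uniform weighting, an assumption
    agg = {}
    for name in client_updates[0]:
        agg[name] = sum(w * cu[name] for w, cu in zip(weights, client_updates))
    return agg
```

Averaging only the low-rank factors is what keeps per-round communication proportional to the adapter size rather than the full model size.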
