FedIA: Towards Domain-Robust Aggregation in Federated Graph Learning
Federated Graph Learning (FGL) enables a central server to coordinate model training across distributed clients without sharing local graph data. However, FGL suffers significantly from cross-silo domain shifts, where each "silo" (domain) contains a limited number of clients with distinct graph topologies. These heterogeneities induce divergent optimization trajectories and ultimately lead to global model divergence. In this work, we reveal a severe architectural pathology termed Structural Orthogonality: the topology-dependent message-passing mechanism forces gradients from different domains to target disjoint coordinates in the parameter space. Through a controlled comparison between backbones, we show statistically that GNN updates are near-perpendicular across domains (with projection ratios near 0). Consequently, naive averaging leads to Consensus Collapse, a phenomenon in which sparse, informative structural signals from individual domains are diluted by the near-zero updates of others. This forces the global model into a suboptimal state that fails to represent domain-specific structural patterns, resulting in poor generalization. To address this, we propose FedIA, a lightweight server-side framework designed to reconcile update conflicts without auxiliary communication. FedIA operates in two stages: (1) Global Importance Masking (GIM) identifies a shared parameter subspace to filter out domain-specific structural noise and prevent signal dilution; (2) Confidence-Aware Momentum Weighting (CAM) dynamically re-weights client contributions based on gradient reliability to amplify stable optimization signals.
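To make the orthogonality claim concrete, the sketch below computes a projection ratio between two flattened client updates: the fraction of one update's norm that survives projection onto the other's direction, which is near 0 when the updates are near-perpendicular. This is a minimal, hypothetical PyTorch measurement; the function name, the random data, and the statistic itself are illustrative assumptions, not the paper's exact experimental protocol.

```python
import torch

def projection_ratio(g_a: torch.Tensor, g_b: torch.Tensor) -> float:
    """Fraction of update g_a's norm that lies along update g_b.

    A ratio near 0 means the two updates target (near-)orthogonal
    directions in parameter space; a ratio near 1 means they agree.
    """
    # Project g_a onto the direction of g_b, then compare norms.
    proj = (g_a @ g_b) / (g_b.norm() ** 2 + 1e-12) * g_b
    return (proj.norm() / (g_a.norm() + 1e-12)).item()

# Hypothetical usage: flattened model updates from two domain silos.
update_domain_a = torch.randn(10_000)
update_domain_b = torch.randn(10_000)
print(projection_ratio(update_domain_a, update_domain_b))
```

In high dimensions even random directions yield small ratios, so a controlled comparison against a non-GNN backbone, as the abstract describes, is what makes the measurement informative.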
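The abstract does not give FedIA's equations, but the two stages admit a natural reading: GIM keeps only coordinates whose importance is high across clients, and CAM weights clients by how reliably their masked updates agree with a server-side momentum direction. The sketch below implements that reading under assumptions of our own: importance measured as mean update magnitude, confidence as cosine similarity to the momentum buffer; the names `fedia_aggregate`, `keep_ratio`, and `beta` are hypothetical, not the authors' specification.

```python
import torch

def fedia_aggregate(updates, momentum, keep_ratio=0.2, beta=0.9):
    """Illustrative server-side aggregation in the spirit of GIM + CAM.

    updates:  list of flattened client updates (one tensor per client)
    momentum: server-side momentum buffer of the same shape
    """
    stacked = torch.stack(updates)  # [num_clients, dim]

    # --- Stage 1: Global Importance Masking (GIM), assumed form ---
    # Score each coordinate by its mean magnitude across clients and
    # keep the top `keep_ratio` fraction: a shared subspace that drops
    # domain-specific coordinates touched by only a few clients.
    importance = stacked.abs().mean(dim=0)
    k = max(1, int(keep_ratio * importance.numel()))
    mask = torch.zeros_like(importance)
    mask[importance.topk(k).indices] = 1.0
    masked = stacked * mask

    # --- Stage 2: Confidence-Aware Momentum Weighting (CAM), assumed form ---
    # Weight each client by the cosine similarity between its masked
    # update and the momentum buffer: stable, consistent directions
    # are amplified, erratic or conflicting ones are down-weighted.
    conf = torch.nn.functional.cosine_similarity(
        masked, momentum.unsqueeze(0), dim=1
    ).clamp(min=0.0)
    weights = conf / (conf.sum() + 1e-12)
    aggregated = (weights.unsqueeze(1) * masked).sum(dim=0)

    # Fold the aggregated direction into the momentum buffer in place.
    momentum.mul_(beta).add_(aggregated, alpha=1.0 - beta)
    return aggregated
```

In this reading, masking before weighting matters: GIM removes the disjoint, domain-specific coordinates first, so CAM's similarity scores are computed only over the shared subspace where cross-domain comparison is meaningful.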