Efficient Federated Learning Tiny Language Models for Mobile Network Feature Prediction

In telecommunications, Autonomous Networks (ANs) automatically adjust configurations based on specific requirements (e.g., bandwidth) and available resources. These networks rely on continuous monitoring and intelligent mechanisms for self-optimization, self-repair, and self-protection, nowadays enhanced by Neural Networks (NNs) to enable predictive modeling and pattern recognition. Here, Federated Learning (FL) allows multiple AN cells, each equipped with NNs, to collaboratively train models while preserving data privacy. However, FL requires frequent transmission of large amounts of neural network data and thus calls for an efficient, standardized compression strategy for reliable communication. To address this, we investigate NNCodec, a Fraunhofer implementation of the ISO/IEC Neural Network Coding (NNC) standard, within a novel FL framework that integrates tiny language models (TLMs) for predicting various mobile network features (e.g., ping, SNR or band frequency). Our experimental results on the Berlin V2X dataset demonstrate that NNCodec achieves transparent compression (i.e., negligible performance loss) while reducing communication overhead to below 1%, showing the effectiveness of combining NNC with FL in collaboratively learned autonomous mobile networks.
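The FL-with-compression workflow described above can be sketched as follows. This is a minimal illustrative example, not the paper's actual pipeline: it uses plain uniform quantization as a stand-in for NNCodec's entropy-coded compression, and NumPy arrays as a stand-in for TLM weight updates; the function names and the quantization step size are assumptions.

```python
import numpy as np

STEP = 0.01  # assumed quantization step size (illustrative only)

def quantize(weights, step=STEP):
    # Uniform quantization of a client's weight update to integer levels,
    # a simplified stand-in for the NNC standard's quantization + entropy coding.
    return np.round(weights / step).astype(np.int32)

def dequantize(q, step=STEP):
    # Server-side reconstruction of the compressed update.
    return q.astype(np.float32) * step

def federated_average(compressed_updates):
    # FedAvg aggregation: reconstruct each client's compressed update,
    # then average element-wise to form the new global model.
    restored = [dequantize(q) for q in compressed_updates]
    return np.mean(restored, axis=0)

# Three hypothetical AN cells each send a quantized weight update.
rng = np.random.default_rng(0)
clients = [rng.normal(size=8).astype(np.float32) for _ in range(3)]
compressed = [quantize(w) for w in clients]
global_weights = federated_average(compressed)

# "Transparent" compression in miniature: the aggregated model deviates from
# the uncompressed average by at most half a quantization step per weight.
error = np.max(np.abs(global_weights - np.mean(clients, axis=0)))
assert error <= STEP / 2
```

In the actual framework, each round would transmit an NNC-compressed bitstream instead of raw integer arrays, which is where the reported sub-1% communication overhead comes from.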
@article{becking2025_2504.01947,
  title={Efficient Federated Learning Tiny Language Models for Mobile Network Feature Prediction},
  author={Daniel Becking and Ingo Friese and Karsten Müller and Thomas Buchholz and Mandy Galkow-Schneider and Wojciech Samek and Detlev Marpe},
  journal={arXiv preprint arXiv:2504.01947},
  year={2025}
}