ResearchTrend.AI
Simplicity is Key: An Unsupervised Pretraining Approach for Sparse Radio Channels

19 May 2025
Jonathan Ott
Maximilian Stahlke
Tobias Feigl
Bjoern M. Eskofier
Christopher Mutschler
Abstract

We introduce the Sparse pretrained Radio Transformer (SpaRTran), an unsupervised representation learning approach based on the concept of compressed sensing for radio channels. Our approach learns embeddings that focus on the physical properties of radio propagation to create an optimal basis for fine-tuning on radio-based downstream tasks. SpaRTran uses a sparse gated autoencoder that induces a simplicity bias into the learned representations, resembling the sparse nature of radio propagation. For signal reconstruction, it learns a dictionary of atomic features, which increases flexibility across signal waveforms and spatiotemporal signal patterns. Our experiments show that SpaRTran reduces errors by up to 85% compared to state-of-the-art methods when fine-tuned on radio fingerprinting, a challenging downstream task. In addition, our method requires less pretraining effort and offers greater flexibility, as we train it solely on individual radio signals. SpaRTran serves as an excellent base model that can be fine-tuned for various radio-based downstream tasks, effectively reducing labeling costs. Moreover, it is significantly more versatile than existing methods and demonstrates superior generalization.
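SpaRTran's exact architecture is not detailed on this page. As a rough illustration of the compressed-sensing idea the abstract references — reconstructing a sparse channel from a dictionary of atomic features — here is a minimal sketch using orthogonal matching pursuit with a random dictionary. All names, sizes, and parameters are illustrative assumptions, not taken from the paper (which learns its dictionary and uses a sparse gated autoencoder rather than a greedy solver).

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k dictionary atoms
    whose least-squares combination best reconstructs y (sparse coding)."""
    residual = y.copy()
    support = []
    coeffs = None
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        support.append(idx)
        # Re-fit coefficients on all selected atoms jointly.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
x_true = np.zeros(256)
x_true[[3, 70, 200]] = [1.5, -2.0, 0.8]   # toy 3-sparse "channel"
y = D @ x_true                            # noiseless observed signal
x_hat = omp(D, y, k=3)
print(np.linalg.norm(D @ x_hat - y))      # reconstruction residual
```

The sparsity prior here plays the same role as SpaRTran's simplicity bias: a radio channel is dominated by a few propagation paths, so a few atoms suffice to reconstruct the signal.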

@article{ott2025_2505.13055,
  title={Simplicity is Key: An Unsupervised Pretraining Approach for Sparse Radio Channels},
  author={Jonathan Ott and Maximilian Stahlke and Tobias Feigl and Bjoern M. Eskofier and Christopher Mutschler},
  journal={arXiv preprint arXiv:2505.13055},
  year={2025}
}