Turbo-ICL: In-Context Learning-Based Turbo Equalization

9 May 2025
Zihang Song
Matteo Zecchin
Bipin Rajendran
Osvaldo Simeone
Abstract

This paper introduces a novel in-context learning (ICL) framework, inspired by large language models (LLMs), for soft-input soft-output channel equalization in coded multiple-input multiple-output (MIMO) systems. The proposed approach learns to infer posterior symbol distributions directly from a prompt of pilot signals and decoder feedback. A key innovation is the use of prompt augmentation to incorporate extrinsic information from the decoder output as additional context, enabling the ICL model to refine its symbol estimates iteratively across turbo decoding iterations. Two model variants, based on Transformer and state-space architectures, are developed and evaluated. Extensive simulations demonstrate that, when traditional linear assumptions break down, e.g., in the presence of low-resolution quantization, ICL equalizers consistently outperform conventional model-based baselines, even when the latter are provided with perfect channel state information. Results also highlight the advantage of Transformer-based models under limited training diversity, as well as the efficiency of state-space models in resource-constrained scenarios.
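As a rough illustration of the prompt-augmentation idea described in the abstract, the sketch below (Python/NumPy) shows how pilot pairs, decoder-feedback soft symbols, and the received data symbols might be stacked into an ICL prompt and passed to a sequence model within one turbo iteration. All names (build_prompt, soft_symbol, icl_equalize), the QPSK/Gray mapping, and the model interface are illustrative assumptions, not the authors' implementation.

import numpy as np

# Hypothetical sketch of ICL-based turbo equalization (not the authors' code).
QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def soft_symbol(llr_pair):
    # Expected QPSK symbol from two extrinsic bit LLRs, with LLR = log P(0)/P(1)
    # and an assumed Gray mapping (bit 0 -> +1/sqrt(2), bit 1 -> -1/sqrt(2)).
    p0 = 1.0 / (1.0 + np.exp(-np.asarray(llr_pair, dtype=float)))
    return ((2 * p0[0] - 1) + 1j * (2 * p0[1] - 1)) / np.sqrt(2)

def build_prompt(pilot_rx, pilot_tx, data_rx, extrinsic_llrs=None):
    # In-context examples: (received, transmitted) pilot pairs.
    rows = [[y.real, y.imag, x.real, x.imag] for y, x in zip(pilot_rx, pilot_tx)]
    if extrinsic_llrs is not None:
        # Prompt augmentation: pair each data observation with the decoder's
        # current soft symbol estimate so the model can refine it.
        for y, llrs in zip(data_rx, extrinsic_llrs):
            s = soft_symbol(llrs)
            rows.append([y.real, y.imag, s.real, s.imag])
    # Queries: observations whose symbol slots are left empty (zeros).
    rows += [[y.real, y.imag, 0.0, 0.0] for y in data_rx]
    return np.asarray(rows, dtype=np.float32)

def icl_equalize(model, pilot_rx, pilot_tx, data_rx, extrinsic_llrs=None):
    # One turbo iteration: the sequence model (Transformer or state-space)
    # maps the prompt to posterior probabilities over QPSK for each query.
    prompt = build_prompt(pilot_rx, pilot_tx, data_rx, extrinsic_llrs)
    posteriors = model(prompt)  # expected shape: (len(data_rx), len(QPSK))
    return posteriors           # soft input to the SISO channel decoder

In a turbo loop, the posteriors returned by icl_equalize would be converted to extrinsic LLRs for the soft-input soft-output channel decoder, and the decoder's extrinsic output fed back as extrinsic_llrs on the next call, refining the symbol estimates across iterations.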

@article{song2025_2505.06175,
  title={Turbo-ICL: In-Context Learning-Based Turbo Equalization},
  author={Zihang Song and Matteo Zecchin and Bipin Rajendran and Osvaldo Simeone},
  journal={arXiv preprint arXiv:2505.06175},
  year={2025}
}