DPD-NeuralEngine: A 22-nm 6.6-TOPS/W/mm² Recurrent Neural Network Accelerator for Wideband Power Amplifier Digital Pre-Distortion

15 October 2024
Ang Li
Haolin Wu
Yizhuo Wu
Qinyu Chen
Leo C. N. de Vreede
Chang Gao
Abstract

The increasing adoption of Deep Neural Network (DNN)-based Digital Pre-distortion (DPD) in modern communication systems necessitates efficient hardware implementations. This paper presents DPD-NeuralEngine, an ultra-fast, tiny-area, and power-efficient DPD accelerator based on a Gated Recurrent Unit (GRU) neural network (NN). Leveraging a co-designed software and hardware approach, our 22 nm CMOS implementation operates at 2 GHz, capable of processing I/Q signals up to 250 MSps. Experimental results demonstrate a throughput of 256.5 GOPS and power efficiency of 1.32 TOPS/W with DPD linearization performance measured in Adjacent Channel Power Ratio (ACPR) of -45.3 dBc and Error Vector Magnitude (EVM) of -39.8 dB. To our knowledge, this work represents the first AI-based DPD application-specific integrated circuit (ASIC) accelerator, achieving a power-area efficiency (PAE) of 6.6 TOPS/W/mm².
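As a sanity check on the headline figures, dividing the reported 1.32 TOPS/W power efficiency by the 6.6 TOPS/W/mm² power-area efficiency implies a core area on the order of 0.2 mm². To make the GRU-based DPD concept concrete, the sketch below shows a minimal software model that pre-distorts a stream of baseband I/Q samples. It is an illustrative assumption, not the authors' architecture: the hidden size, layer count, and frame length are placeholders, and the real design is a fixed-point ASIC datapath rather than a PyTorch module.

```python
# Illustrative sketch only: a minimal GRU-based DPD model operating on I/Q
# samples. Hyperparameters (hidden size, frame length) are placeholders and
# do not reflect the DPD-NeuralEngine hardware implementation.
import torch
import torch.nn as nn

class GRUDPD(nn.Module):
    def __init__(self, hidden_size: int = 16):
        super().__init__()
        # Input: 2 features per sample (I and Q of the PA input signal).
        self.gru = nn.GRU(input_size=2, hidden_size=hidden_size, batch_first=True)
        # Output: 2 features per sample (pre-distorted I and Q fed to the PA).
        self.out = nn.Linear(hidden_size, 2)

    def forward(self, iq: torch.Tensor) -> torch.Tensor:
        # iq: (batch, time, 2) sequence of baseband I/Q samples.
        h, _ = self.gru(iq)
        return self.out(h)

# Example: pre-distort a batch of 1024-sample I/Q frames.
model = GRUDPD()
x = torch.randn(8, 1024, 2)   # random stand-in for PA input I/Q
y = model(x)                  # pre-distorted I/Q, shape (8, 1024, 2)
```

In a DPD training loop, such a model would typically be fit so that the cascade of pre-distorter and power amplifier reproduces the ideal linear output, with linearization quality then evaluated via ACPR and EVM as in the paper.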

@article{li2025_2410.11766,
  title={DPD-NeuralEngine: A 22-nm 6.6-TOPS/W/mm$^2$ Recurrent Neural Network Accelerator for Wideband Power Amplifier Digital Pre-Distortion},
  author={Ang Li and Haolin Wu and Yizhuo Wu and Qinyu Chen and Leo C. N. de Vreede and Chang Gao},
  journal={arXiv preprint arXiv:2410.11766},
  year={2025}
}