
Wireless Hearables With Programmable Speech AI Accelerators

24 March 2025
Malek Itani
Tuochao Chen
Arun Raghavan
Gavriel Kohlberg
Shyamnath Gollakota
Community: AuLLM
ArXiv · PDF · HTML
Abstract

The conventional wisdom has been that designing ultra-compact, battery-constrained wireless hearables with on-device speech AI models is challenging due to the high computational demands of streaming deep learning models. Speech AI models require continuous, real-time audio processing, imposing strict computational and I/O constraints. We present NeuralAids, a fully on-device speech AI system for wireless hearables, enabling real-time speech enhancement and denoising on compact, battery-constrained devices. Our system bridges the gap between state-of-the-art deep learning for speech enhancement and low-power AI hardware by making three key technical contributions: 1) a wireless hearable platform integrating a speech AI accelerator for efficient on-device streaming inference, 2) an optimized dual-path neural network designed for low-latency, high-quality speech enhancement, and 3) a hardware-software co-design that uses mixed-precision quantization and quantization-aware training to achieve real-time performance under strict power constraints. Our system processes 6 ms audio chunks in real-time, achieving an inference time of 5.54 ms while consuming 71.6 mW. In real-world evaluations, including a user study with 28 participants, our system outperforms prior on-device models in speech quality and noise suppression, paving the way for next-generation intelligent wireless hearables that can enhance hearing entirely on-device.
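To make the streaming design more concrete, below is a minimal sketch of a dual-path recurrent block of the kind the abstract's second contribution refers to: an intra-chunk pass over each short chunk followed by an inter-chunk pass across chunks. The layer sizes, chunk geometry, and choice of GRUs are illustrative assumptions, not the authors' architecture.

# Illustrative dual-path RNN block (assumed layer sizes; not the paper's exact network).
import torch
import torch.nn as nn

class DualPathBlock(nn.Module):
    """Intra-chunk RNN models structure within each short chunk; inter-chunk
    RNN models structure across chunks. Hidden sizes are arbitrary choices."""
    def __init__(self, feat_dim=64, hidden=64):
        super().__init__()
        self.intra_rnn = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.intra_proj = nn.Linear(2 * hidden, feat_dim)
        # A unidirectional inter-chunk path keeps the block causal for streaming
        # (an assumption here, not necessarily the authors' design choice).
        self.inter_rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.inter_proj = nn.Linear(hidden, feat_dim)

    def forward(self, x):
        # x: (batch, num_chunks, chunk_len, feat_dim)
        b, s, k, d = x.shape
        intra = x.reshape(b * s, k, d)                      # treat each chunk as a short sequence
        intra, _ = self.intra_rnn(intra)
        x = x + self.intra_proj(intra).reshape(b, s, k, d)  # residual connection
        inter = x.permute(0, 2, 1, 3).reshape(b * k, s, d)  # sequence runs across chunks
        inter, _ = self.inter_rnn(inter)
        inter = self.inter_proj(inter).reshape(b, k, s, d).permute(0, 2, 1, 3)
        return x + inter                                    # residual connection

block = DualPathBlock()
print(block(torch.randn(1, 50, 16, 64)).shape)  # torch.Size([1, 50, 16, 64])

The third contribution, mixed-precision quantization with quantization-aware training, can be sketched with a fake-quantization layer that uses a straight-through estimator so gradients flow through the rounding step. The per-layer bit widths and the toy training loop below are illustrative assumptions, not the paper's hardware-software co-design.

# Illustrative mixed-precision quantization-aware training (assumed bit widths).
import torch
import torch.nn as nn

class FakeQuant(torch.autograd.Function):
    """Uniform fake quantizer with a straight-through estimator gradient."""
    @staticmethod
    def forward(ctx, w, bits):
        qmax = 2 ** (bits - 1) - 1
        scale = w.abs().max().clamp(min=1e-8) / qmax
        return torch.round(w / scale).clamp(-qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None  # straight-through: gradients pass unchanged

class QuantLinear(nn.Linear):
    """Linear layer whose weights are fake-quantized to a chosen bit width."""
    def __init__(self, in_f, out_f, bits=8):
        super().__init__(in_f, out_f)
        self.bits = bits

    def forward(self, x):
        return nn.functional.linear(x, FakeQuant.apply(self.weight, self.bits), self.bias)

# Mixed precision: keep a sensitive layer at 8 bits, push another down to 4 bits.
model = nn.Sequential(QuantLinear(64, 64, bits=8), nn.ReLU(), QuantLinear(64, 64, bits=4))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, target = torch.randn(8, 64), torch.randn(8, 64)
for _ in range(100):  # toy training loop; the model learns through the quantizers
    loss = nn.functional.mse_loss(model(x), target)
    opt.zero_grad(); loss.backward(); opt.step()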

@article{itani2025_2503.18698,
  title={Wireless Hearables With Programmable Speech AI Accelerators},
  author={Malek Itani and Tuochao Chen and Arun Raghavan and Gavriel Kohlberg and Shyamnath Gollakota},
  journal={arXiv preprint arXiv:2503.18698},
  year={2025}
}